2023-07-17 22:15:10,512 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9
2023-07-17 22:15:10,531 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-17 22:15:10,549 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-17 22:15:10,549 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314, deleteOnExit=true
2023-07-17 22:15:10,549 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-17 22:15:10,550 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/test.cache.data in system properties and HBase conf
2023-07-17 22:15:10,550 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.tmp.dir in system properties and HBase conf
2023-07-17 22:15:10,551 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir in system properties and HBase conf
2023-07-17 22:15:10,551 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-17 22:15:10,551 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-17 22:15:10,551 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-17 22:15:10,688 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-17 22:15:11,230 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-17 22:15:11,236 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-17 22:15:11,236 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-17 22:15:11,236 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-17 22:15:11,237 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-17 22:15:11,237 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-17 22:15:11,237 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-17 22:15:11,238 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-17 22:15:11,238 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-17 22:15:11,238 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-17 22:15:11,239 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/nfs.dump.dir in system properties and HBase conf
2023-07-17 22:15:11,239 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/java.io.tmpdir in system properties and HBase conf
2023-07-17 22:15:11,239 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-17 22:15:11,239 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-17 22:15:11,240 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-17 22:15:11,884 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-17 22:15:11,890 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-17 22:15:12,263 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-17 22:15:12,455 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-17 22:15:12,471 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-17 22:15:12,514 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-17 22:15:12,547 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/java.io.tmpdir/Jetty_localhost_34021_hdfs____ljxw60/webapp
2023-07-17 22:15:12,692 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34021
2023-07-17 22:15:12,737 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-17 22:15:12,737 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-17 22:15:13,180 WARN [Listener at localhost/38457] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-17 22:15:13,268 WARN [Listener at localhost/38457] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-17 22:15:13,288 WARN [Listener at localhost/38457] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-17 22:15:13,296 INFO [Listener at localhost/38457] log.Slf4jLog(67): jetty-6.1.26
2023-07-17 22:15:13,303 INFO [Listener at localhost/38457] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/java.io.tmpdir/Jetty_localhost_33085_datanode____gc95is/webapp
2023-07-17 22:15:13,407 INFO [Listener at localhost/38457] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33085
2023-07-17 22:15:13,960 WARN [Listener at localhost/39661] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-17 22:15:14,047 WARN [Listener at localhost/39661] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-17 22:15:14,058 WARN [Listener at localhost/39661] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-17 22:15:14,060 INFO [Listener at localhost/39661] log.Slf4jLog(67): jetty-6.1.26
2023-07-17 22:15:14,069 INFO [Listener at localhost/39661] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/java.io.tmpdir/Jetty_localhost_33217_datanode____.r18za9/webapp
2023-07-17 22:15:14,187 INFO [Listener at localhost/39661] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33217
2023-07-17 22:15:14,196 WARN [Listener at localhost/38863] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-17 22:15:14,213 WARN [Listener at localhost/38863] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-17 22:15:14,216 WARN [Listener at localhost/38863] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-17 22:15:14,217 INFO [Listener at localhost/38863] log.Slf4jLog(67): jetty-6.1.26
2023-07-17 22:15:14,224 INFO [Listener at localhost/38863] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/java.io.tmpdir/Jetty_localhost_44261_datanode____.pscvxo/webapp
2023-07-17 22:15:14,344 INFO [Listener at localhost/38863] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44261
2023-07-17 22:15:14,352 WARN [Listener at localhost/37695] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-17 22:15:14,588 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x40e0f704ee15e39b: Processing first storage report for DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e from datanode 68ac323e-41c1-4b73-8ba0-ab0a78db4c28
2023-07-17 22:15:14,590 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x40e0f704ee15e39b: from storage DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e node DatanodeRegistration(127.0.0.1:45423, datanodeUuid=68ac323e-41c1-4b73-8ba0-ab0a78db4c28, infoPort=42771, infoSecurePort=0, ipcPort=37695, storageInfo=lv=-57;cid=testClusterID;nsid=1770773528;c=1689632111991), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-17 22:15:14,590 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x10a606b23be376ca: Processing first storage report for DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec from datanode 63306b39-16dd-4949-bc27-618b5c64090d
2023-07-17 22:15:14,590 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x10a606b23be376ca: from storage DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec node DatanodeRegistration(127.0.0.1:44577, datanodeUuid=63306b39-16dd-4949-bc27-618b5c64090d, infoPort=33907, infoSecurePort=0, ipcPort=38863, storageInfo=lv=-57;cid=testClusterID;nsid=1770773528;c=1689632111991), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-17 22:15:14,590 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x40278e870acbf982: Processing first storage report for DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4 from datanode 3621efb1-40cf-4c42-b131-74ca6a4c5501
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x40278e870acbf982: from storage DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4 node DatanodeRegistration(127.0.0.1:44355, datanodeUuid=3621efb1-40cf-4c42-b131-74ca6a4c5501, infoPort=43483, infoSecurePort=0, ipcPort=39661, storageInfo=lv=-57;cid=testClusterID;nsid=1770773528;c=1689632111991), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x40e0f704ee15e39b: Processing first storage report for DS-fca758b9-6f0e-4b21-a5c3-e096fcdfa55c from datanode 68ac323e-41c1-4b73-8ba0-ab0a78db4c28
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x40e0f704ee15e39b: from storage DS-fca758b9-6f0e-4b21-a5c3-e096fcdfa55c node DatanodeRegistration(127.0.0.1:45423, datanodeUuid=68ac323e-41c1-4b73-8ba0-ab0a78db4c28, infoPort=42771, infoSecurePort=0, ipcPort=37695, storageInfo=lv=-57;cid=testClusterID;nsid=1770773528;c=1689632111991), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x10a606b23be376ca: Processing first storage report for DS-910ae5f1-b366-4cd3-bba5-53b35b83cb37 from datanode 63306b39-16dd-4949-bc27-618b5c64090d
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x10a606b23be376ca: from storage DS-910ae5f1-b366-4cd3-bba5-53b35b83cb37 node DatanodeRegistration(127.0.0.1:44577, datanodeUuid=63306b39-16dd-4949-bc27-618b5c64090d, infoPort=33907, infoSecurePort=0, ipcPort=38863, storageInfo=lv=-57;cid=testClusterID;nsid=1770773528;c=1689632111991), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x40278e870acbf982: Processing first storage report for DS-ca0c701c-932d-4490-b157-14729d33e846 from datanode 3621efb1-40cf-4c42-b131-74ca6a4c5501
2023-07-17 22:15:14,591 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x40278e870acbf982: from storage DS-ca0c701c-932d-4490-b157-14729d33e846 node DatanodeRegistration(127.0.0.1:44355, datanodeUuid=3621efb1-40cf-4c42-b131-74ca6a4c5501, infoPort=43483, infoSecurePort=0, ipcPort=39661, storageInfo=lv=-57;cid=testClusterID;nsid=1770773528;c=1689632111991), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-17 22:15:14,803 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9
2023-07-17 22:15:14,887 INFO [Listener at localhost/37695] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/zookeeper_0, clientPort=57139, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-17 22:15:14,904 INFO [Listener at localhost/37695] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57139
2023-07-17 22:15:14,914 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:14,916 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:15,628 INFO [Listener at localhost/37695] util.FSUtils(471): Created version file at hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b with version=8
2023-07-17 22:15:15,628 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/hbase-staging
2023-07-17 22:15:15,638 DEBUG [Listener at localhost/37695] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-17 22:15:15,638 DEBUG [Listener at localhost/37695] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-17 22:15:15,638 DEBUG [Listener at localhost/37695] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-17 22:15:15,638 DEBUG [Listener at localhost/37695] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-17 22:15:16,055 INFO [Listener at localhost/37695] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-17 22:15:16,694 INFO [Listener at localhost/37695] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-17 22:15:16,739 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:16,740 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:16,740 INFO [Listener at localhost/37695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-17 22:15:16,741 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:16,741 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-17 22:15:16,916 INFO [Listener at localhost/37695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-17 22:15:17,047 DEBUG [Listener at localhost/37695] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-17 22:15:17,156 INFO [Listener at localhost/37695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43315
2023-07-17 22:15:17,170 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:17,172 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:17,198 INFO [Listener at localhost/37695] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43315 connecting to ZooKeeper ensemble=127.0.0.1:57139
2023-07-17 22:15:17,256 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:433150x0, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-17 22:15:17,260 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43315-0x101755a8bb70000 connected
2023-07-17 22:15:17,298 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-17 22:15:17,299 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-17 22:15:17,303 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-17 22:15:17,320 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43315
2023-07-17 22:15:17,321 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43315
2023-07-17 22:15:17,321 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43315
2023-07-17 22:15:17,322 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43315
2023-07-17 22:15:17,322 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43315
2023-07-17 22:15:17,369 INFO [Listener at localhost/37695] log.Log(170): Logging initialized @7642ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-17 22:15:17,542 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-17 22:15:17,543 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-17 22:15:17,544 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-17 22:15:17,546 INFO [Listener at localhost/37695] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-17 22:15:17,547 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-17 22:15:17,547 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-17 22:15:17,552 INFO [Listener at localhost/37695] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-17 22:15:17,637 INFO [Listener at localhost/37695] http.HttpServer(1146): Jetty bound to port 33991
2023-07-17 22:15:17,639 INFO [Listener at localhost/37695] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-17 22:15:17,679 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:17,684 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32d90619{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,AVAILABLE}
2023-07-17 22:15:17,685 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:17,685 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e41c305{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-17 22:15:17,760 INFO [Listener at localhost/37695] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-17 22:15:17,776 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-17 22:15:17,776 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-17 22:15:17,779 INFO [Listener at localhost/37695] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-17 22:15:17,790 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:17,822 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27ce108a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-17 22:15:17,838 INFO [Listener at localhost/37695] server.AbstractConnector(333): Started ServerConnector@93a89c0{HTTP/1.1, (http/1.1)}{0.0.0.0:33991}
2023-07-17 22:15:17,838 INFO [Listener at localhost/37695] server.Server(415): Started @8112ms
2023-07-17 22:15:17,843 INFO [Listener at localhost/37695] master.HMaster(444): hbase.rootdir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b, hbase.cluster.distributed=false
2023-07-17 22:15:17,932 INFO [Listener at localhost/37695] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-17 22:15:17,932 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:17,932 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:17,932 INFO [Listener at localhost/37695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-17 22:15:17,933 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:17,933 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-17 22:15:17,940 INFO [Listener at localhost/37695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-17 22:15:17,946 INFO [Listener at localhost/37695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42021
2023-07-17 22:15:17,949 INFO [Listener at localhost/37695] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-17 22:15:17,956 DEBUG [Listener at localhost/37695] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-17 22:15:17,957 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:17,959 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:17,961 INFO [Listener at localhost/37695] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42021 connecting to ZooKeeper ensemble=127.0.0.1:57139
2023-07-17 22:15:17,966 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:420210x0, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-17 22:15:17,971 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:420210x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-17 22:15:17,979 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:420210x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-17 22:15:17,982 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42021-0x101755a8bb70001 connected
2023-07-17 22:15:17,986 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-17 22:15:17,989 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42021
2023-07-17 22:15:17,990 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42021
2023-07-17 22:15:17,993 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42021
2023-07-17 22:15:17,996 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42021
2023-07-17 22:15:17,998 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42021
2023-07-17 22:15:18,002 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-17 22:15:18,003 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-17 22:15:18,003 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-17 22:15:18,005 INFO [Listener at localhost/37695] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-17 22:15:18,005 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-17 22:15:18,005 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-17 22:15:18,005 INFO [Listener at localhost/37695] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-17 22:15:18,008 INFO [Listener at localhost/37695] http.HttpServer(1146): Jetty bound to port 43677
2023-07-17 22:15:18,008 INFO [Listener at localhost/37695] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-17 22:15:18,024 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,024 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3063b687{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,AVAILABLE}
2023-07-17 22:15:18,025 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,025 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d793d90{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-17 22:15:18,037 INFO [Listener at localhost/37695] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-17 22:15:18,038 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-17 22:15:18,039 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-17 22:15:18,039 INFO [Listener at localhost/37695] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-17 22:15:18,040 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,046 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3900982e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-17 22:15:18,047 INFO [Listener at localhost/37695] server.AbstractConnector(333): Started ServerConnector@3616823c{HTTP/1.1, (http/1.1)}{0.0.0.0:43677}
2023-07-17 22:15:18,047 INFO [Listener at localhost/37695] server.Server(415): Started @8321ms
2023-07-17 22:15:18,065 INFO [Listener at localhost/37695] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-17 22:15:18,065 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:18,065 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:18,066 INFO [Listener at localhost/37695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-17 22:15:18,066 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:18,066 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-17 22:15:18,067 INFO [Listener at localhost/37695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-17 22:15:18,069 INFO [Listener at localhost/37695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34647
2023-07-17 22:15:18,070 INFO [Listener at localhost/37695] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-17 22:15:18,071 DEBUG [Listener at localhost/37695] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-17 22:15:18,072 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:18,074 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:18,076 INFO [Listener at localhost/37695] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34647 connecting to ZooKeeper ensemble=127.0.0.1:57139
2023-07-17 22:15:18,081 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:346470x0, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-17 22:15:18,083 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:346470x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-17 22:15:18,084 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:346470x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-17 22:15:18,084 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34647-0x101755a8bb70002 connected
2023-07-17 22:15:18,085 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-17 22:15:18,087 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34647
2023-07-17 22:15:18,087 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34647
2023-07-17 22:15:18,088 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34647
2023-07-17 22:15:18,093 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34647
2023-07-17 22:15:18,094 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34647
2023-07-17 22:15:18,097 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-17 22:15:18,098 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-17 22:15:18,098 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-17 22:15:18,099 INFO [Listener at localhost/37695] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-17 22:15:18,099 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-17 22:15:18,099 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-17 22:15:18,099 INFO [Listener at localhost/37695] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-17 22:15:18,100 INFO [Listener at localhost/37695] http.HttpServer(1146): Jetty bound to port 34927
2023-07-17 22:15:18,101 INFO [Listener at localhost/37695] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-17 22:15:18,105 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,105 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33aee7d7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,AVAILABLE}
2023-07-17 22:15:18,106 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,106 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@532be8b6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-17 22:15:18,120 INFO [Listener at localhost/37695] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-17 22:15:18,121 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-17 22:15:18,121 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-17 22:15:18,121 INFO [Listener at localhost/37695] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-17 22:15:18,122 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,124 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@18cf84a5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-17 22:15:18,125 INFO [Listener at localhost/37695] server.AbstractConnector(333): Started ServerConnector@22f55d63{HTTP/1.1, (http/1.1)}{0.0.0.0:34927}
2023-07-17 22:15:18,125 INFO [Listener at localhost/37695] server.Server(415): Started @8398ms
2023-07-17 22:15:18,141 INFO [Listener at localhost/37695] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-17 22:15:18,142 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:18,142 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:18,142 INFO [Listener at localhost/37695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-17 22:15:18,142 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-17 22:15:18,142 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-17 22:15:18,142 INFO [Listener at localhost/37695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-17 22:15:18,147 INFO [Listener at localhost/37695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41625
2023-07-17 22:15:18,149 INFO [Listener at localhost/37695] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-17 22:15:18,159 DEBUG [Listener at localhost/37695] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-17 22:15:18,160 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:18,162 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:18,163 INFO [Listener at localhost/37695] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41625 connecting to ZooKeeper ensemble=127.0.0.1:57139
2023-07-17 22:15:18,168 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:416250x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-17 22:15:18,169 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:416250x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-17 22:15:18,170 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:416250x0, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-17 22:15:18,171 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:416250x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-17 22:15:18,177 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41625-0x101755a8bb70003 connected
2023-07-17 22:15:18,177 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41625
2023-07-17 22:15:18,178 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41625
2023-07-17 22:15:18,178 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41625
2023-07-17 22:15:18,180 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41625
2023-07-17 22:15:18,180 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41625
2023-07-17 22:15:18,183 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-17 22:15:18,183 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-17 22:15:18,183 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-17 22:15:18,183 INFO [Listener at localhost/37695] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-17 22:15:18,184 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-17 22:15:18,184 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-17 22:15:18,184 INFO [Listener at localhost/37695] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-17 22:15:18,185 INFO [Listener at localhost/37695] http.HttpServer(1146): Jetty bound to port 35673
2023-07-17 22:15:18,185 INFO [Listener at localhost/37695] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-17 22:15:18,186 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,187 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ac4d373{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,AVAILABLE}
2023-07-17 22:15:18,187 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,187 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c1fc007{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-17 22:15:18,197 INFO [Listener at localhost/37695] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-17 22:15:18,198 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-17 22:15:18,198 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-17 22:15:18,199 INFO [Listener at localhost/37695] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-17 22:15:18,200 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-17 22:15:18,201 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3708da20{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-17 22:15:18,203 INFO [Listener at localhost/37695] server.AbstractConnector(333): Started ServerConnector@56e7872c{HTTP/1.1, (http/1.1)}{0.0.0.0:35673}
2023-07-17 22:15:18,203 INFO [Listener at localhost/37695] server.Server(415): Started @8476ms
2023-07-17 22:15:18,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-17 22:15:18,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@ef9369a{HTTP/1.1, (http/1.1)}{0.0.0.0:34305}
2023-07-17 22:15:18,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8496ms
2023-07-17 22:15:18,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43315,1689632115843
2023-07-17 22:15:18,237 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-17 22:15:18,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43315,1689632115843
2023-07-17 22:15:18,266 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-17 22:15:18,266 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-17 22:15:18,266 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-17 22:15:18,266 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-17 22:15:18,266 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-17 22:15:18,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-17 22:15:18,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43315,1689632115843 from backup master directory
2023-07-17 22:15:18,272 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-17 22:15:18,277 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43315,1689632115843
2023-07-17 22:15:18,277 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-17 22:15:18,278 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-17 22:15:18,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43315,1689632115843
2023-07-17 22:15:18,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-17 22:15:18,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-17 22:15:18,407 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/hbase.id with ID: edb3e6a8-bdf6-485c-8606-a46d4ec90872
2023-07-17 22:15:18,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-17 22:15:18,476 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-17 22:15:18,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x339e0a8e to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-17 22:15:18,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@566a35b0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-17 22:15:18,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-17 22:15:18,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-17 22:15:18,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-17 22:15:18,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-17
22:15:18,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-17 22:15:18,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-17 22:15:18,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 
2023-07-17 22:15:18,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store-tmp 2023-07-17 22:15:18,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:18,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 22:15:18,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:18,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:18,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 22:15:18,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:18,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 22:15:18,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:18,767 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/WALs/jenkins-hbase4.apache.org,43315,1689632115843 2023-07-17 22:15:18,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43315%2C1689632115843, suffix=, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/WALs/jenkins-hbase4.apache.org,43315,1689632115843, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/oldWALs, maxLogs=10 2023-07-17 22:15:18,942 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:18,948 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:18,979 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:18,998 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-17 22:15:19,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/WALs/jenkins-hbase4.apache.org,43315,1689632115843/jenkins-hbase4.apache.org%2C43315%2C1689632115843.1689632118817 2023-07-17 22:15:19,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK], DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK], DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK]] 2023-07-17 22:15:19,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:19,096 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:19,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:19,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:19,195 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:19,207 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-17 22:15:19,248 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-17 22:15:19,266 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-17 22:15:19,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:19,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:19,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:19,314 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:19,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11371487360, jitterRate=0.05905228853225708}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:19,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-17 22:15:19,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-17 22:15:19,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-17 22:15:19,363 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-17 22:15:19,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-17 22:15:19,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 48 msec 2023-07-17 22:15:19,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-17 22:15:19,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-17 22:15:19,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-17 22:15:19,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-17 22:15:19,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-17 22:15:19,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-17 22:15:19,479 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:19,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-17 22:15:19,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-17 22:15:19,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-17 22:15:19,508 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:19,508 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:19,508 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:19,508 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:19,508 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:19,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43315,1689632115843, sessionid=0x101755a8bb70000, setting cluster-up flag (Was=false) 2023-07-17 22:15:19,532 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:19,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-17 22:15:19,539 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43315,1689632115843 2023-07-17 22:15:19,544 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:19,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-17 22:15:19,552 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43315,1689632115843 2023-07-17 22:15:19,555 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.hbase-snapshot/.tmp 2023-07-17 22:15:19,637 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(951): ClusterId : edb3e6a8-bdf6-485c-8606-a46d4ec90872 2023-07-17 22:15:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-17 22:15:19,654 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(951): ClusterId : edb3e6a8-bdf6-485c-8606-a46d4ec90872 2023-07-17 22:15:19,663 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:19,663 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:19,663 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(951): ClusterId : edb3e6a8-bdf6-485c-8606-a46d4ec90872 2023-07-17 22:15:19,665 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:19,673 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:19,673 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:19,674 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:19,674 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:19,674 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:19,674 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:19,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 
2023-07-17 22:15:19,678 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:19,680 DEBUG [RS:1;jenkins-hbase4:34647] zookeeper.ReadOnlyZKClient(139): Connect 0x1e6a0baf to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:19,680 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:19,681 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:19,681 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:19,684 DEBUG [RS:2;jenkins-hbase4:41625] zookeeper.ReadOnlyZKClient(139): Connect 0x301cf1f0 to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:19,685 DEBUG [RS:0;jenkins-hbase4:42021] zookeeper.ReadOnlyZKClient(139): Connect 0x3218058a to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:19,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-17 22:15:19,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-17 22:15:19,704 DEBUG [RS:0;jenkins-hbase4:42021] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3cdffdb9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:19,705 DEBUG [RS:1;jenkins-hbase4:34647] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10ade612, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:19,705 DEBUG [RS:0;jenkins-hbase4:42021] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38e06efa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:19,705 DEBUG [RS:1;jenkins-hbase4:34647] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@149260df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:19,707 DEBUG [RS:2;jenkins-hbase4:41625] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3edb1983, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:19,707 DEBUG [RS:2;jenkins-hbase4:41625] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7fe613b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:19,736 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41625 2023-07-17 22:15:19,737 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34647 2023-07-17 22:15:19,746 INFO [RS:1;jenkins-hbase4:34647] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:19,747 INFO [RS:2;jenkins-hbase4:41625] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:19,748 INFO [RS:2;jenkins-hbase4:41625] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:19,747 INFO [RS:1;jenkins-hbase4:34647] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:19,748 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 22:15:19,748 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 22:15:19,751 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:41625, startcode=1689632118141 2023-07-17 22:15:19,751 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:34647, startcode=1689632118064 2023-07-17 22:15:19,754 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42021 2023-07-17 22:15:19,755 INFO [RS:0;jenkins-hbase4:42021] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:19,755 INFO [RS:0;jenkins-hbase4:42021] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:19,755 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-17 22:15:19,756 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:42021, startcode=1689632117931 2023-07-17 22:15:19,778 DEBUG [RS:0;jenkins-hbase4:42021] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:19,778 DEBUG [RS:1;jenkins-hbase4:34647] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:19,778 DEBUG [RS:2;jenkins-hbase4:41625] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:19,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:19,887 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55117, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:19,887 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44127, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:19,887 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56843, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:19,891 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 22:15:19,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-17 22:15:19,900 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:19,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 22:15:19,902 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:19,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:19,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; 
timeout=30000, timestamp=1689632149910 2023-07-17 22:15:19,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-17 22:15:19,917 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:19,924 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:19,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-17 22:15:19,925 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-17 22:15:19,926 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:19,933 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:19,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-17 22:15:19,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-17 22:15:19,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-17 22:15:19,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-17 22:15:19,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,002 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(2830): Master is not running yet 2023-07-17 22:15:20,002 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(2830): Master is not running yet 2023-07-17 22:15:20,004 WARN [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-17 22:15:20,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-17 22:15:20,002 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(2830): Master is not running yet 2023-07-17 22:15:20,005 WARN [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-17 22:15:20,004 WARN [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-17 22:15:20,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-17 22:15:20,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-17 22:15:20,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-17 22:15:20,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-17 22:15:20,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632120015,5,FailOnTimeoutGroup] 2023-07-17 22:15:20,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632120015,5,FailOnTimeoutGroup] 2023-07-17 22:15:20,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-17 22:15:20,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,086 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:20,087 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:20,088 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b 2023-07-17 22:15:20,106 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:41625, startcode=1689632118141 2023-07-17 22:15:20,106 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:42021, startcode=1689632117931 2023-07-17 22:15:20,106 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:34647, startcode=1689632118064 2023-07-17 22:15:20,128 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,129 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 22:15:20,136 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-17 22:15:20,139 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b 2023-07-17 22:15:20,139 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38457 2023-07-17 22:15:20,139 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33991 2023-07-17 22:15:20,139 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,139 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:20,140 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-17 22:15:20,143 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,144 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 22:15:20,144 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-17 22:15:20,149 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b 2023-07-17 22:15:20,149 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38457 2023-07-17 22:15:20,150 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33991 2023-07-17 22:15:20,153 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b 2023-07-17 22:15:20,155 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38457 2023-07-17 22:15:20,155 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33991 2023-07-17 22:15:20,161 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:20,168 DEBUG [RS:0;jenkins-hbase4:42021] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,168 WARN [RS:0;jenkins-hbase4:42021] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:20,169 INFO [RS:0;jenkins-hbase4:42021] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:20,170 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,171 DEBUG [RS:2;jenkins-hbase4:41625] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,171 WARN [RS:2;jenkins-hbase4:41625] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:20,170 DEBUG [RS:1;jenkins-hbase4:34647] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,171 WARN [RS:1;jenkins-hbase4:34647] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-17 22:15:20,173 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:20,174 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42021,1689632117931] 2023-07-17 22:15:20,174 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41625,1689632118141] 2023-07-17 22:15:20,174 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34647,1689632118064] 2023-07-17 22:15:20,171 INFO [RS:2;jenkins-hbase4:41625] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:20,172 INFO [RS:1;jenkins-hbase4:34647] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:20,183 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:20,183 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,184 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,198 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info 2023-07-17 22:15:20,199 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:20,200 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:20,200 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:20,203 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:20,203 DEBUG [RS:1;jenkins-hbase4:34647] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,207 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:20,219 DEBUG [RS:1;jenkins-hbase4:34647] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,220 DEBUG [RS:2;jenkins-hbase4:41625] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,220 DEBUG [RS:1;jenkins-hbase4:34647] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,224 DEBUG [RS:2;jenkins-hbase4:41625] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,220 DEBUG [RS:0;jenkins-hbase4:42021] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,225 DEBUG [RS:2;jenkins-hbase4:41625] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,225 DEBUG [RS:0;jenkins-hbase4:42021] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,226 DEBUG [RS:0;jenkins-hbase4:42021] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,235 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:20,235 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:20,245 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:20,251 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:20,251 DEBUG [RS:1;jenkins-hbase4:34647] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:20,256 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table 2023-07-17 22:15:20,257 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:20,260 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:20,261 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740 2023-07-17 22:15:20,264 INFO [RS:0;jenkins-hbase4:42021] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:20,265 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740 2023-07-17 22:15:20,264 INFO [RS:1;jenkins-hbase4:34647] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:20,265 INFO [RS:2;jenkins-hbase4:41625] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:20,270 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-17 22:15:20,272 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:20,291 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:20,293 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11752013280, jitterRate=0.09449152648448944}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:20,293 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:20,294 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:20,294 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:20,294 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:20,294 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:20,294 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:20,307 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:20,307 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:20,313 INFO [RS:2;jenkins-hbase4:41625] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:20,315 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:20,315 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-17 22:15:20,316 INFO [RS:1;jenkins-hbase4:34647] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:20,319 INFO [RS:0;jenkins-hbase4:42021] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:20,336 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-17 22:15:20,361 INFO [RS:1;jenkins-hbase4:34647] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:20,370 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
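The "42.7 M" fallback logged by FlushLargeStoresPolicy is just the region memstore flush size divided across the three hbase:meta column families (info, rep_barrier, table). Assuming the default hbase.hregion.memstore.flush.size of 128 MB, the division reproduces the flushSizeLowerBound printed in the open message:

    134217728 bytes (128 MB) / 3 families = 44739242 bytes ≈ 42.7 MB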
2023-07-17 22:15:20,361 INFO [RS:0;jenkins-hbase4:42021] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:20,361 INFO [RS:2;jenkins-hbase4:41625] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:20,391 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:20,371 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,371 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,393 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-17 22:15:20,399 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:20,438 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:20,446 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,447 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-17 22:15:20,447 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,447 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
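The compaction throughput bounds above (100 MB/s upper, 50 MB/s lower, retuned every 60000 ms) come from PressureAwareCompactionThroughputController. A hedged sketch of overriding them in a test configuration; the key names below are the ones that controller is expected to read in 2.x, so treat them as assumptions rather than values verified against this build:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Bounds are expressed in bytes per second.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        // How often the controller re-tunes the current limit, in milliseconds.
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
      }
    }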
2023-07-17 22:15:20,447 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,447 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:20,448 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,448 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:20,449 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:0;jenkins-hbase4:42021] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,450 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,450 DEBUG [RS:1;jenkins-hbase4:34647] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,449 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,450 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:20,450 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,450 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,450 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,450 DEBUG [RS:2;jenkins-hbase4:41625] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:20,456 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,456 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,456 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,456 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,457 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,457 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:20,459 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,459 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,459 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,478 INFO [RS:2;jenkins-hbase4:41625] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:20,479 INFO [RS:1;jenkins-hbase4:34647] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:20,483 INFO [RS:0;jenkins-hbase4:42021] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:20,484 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41625,1689632118141-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,484 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42021,1689632117931-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,484 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34647,1689632118064-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:20,513 INFO [RS:0;jenkins-hbase4:42021] regionserver.Replication(203): jenkins-hbase4.apache.org,42021,1689632117931 started 2023-07-17 22:15:20,513 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42021,1689632117931, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42021, sessionid=0x101755a8bb70001 2023-07-17 22:15:20,514 INFO [RS:1;jenkins-hbase4:34647] regionserver.Replication(203): jenkins-hbase4.apache.org,34647,1689632118064 started 2023-07-17 22:15:20,515 INFO [RS:2;jenkins-hbase4:41625] regionserver.Replication(203): jenkins-hbase4.apache.org,41625,1689632118141 started 2023-07-17 22:15:20,515 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:20,515 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41625,1689632118141, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41625, sessionid=0x101755a8bb70003 2023-07-17 22:15:20,515 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34647,1689632118064, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34647, sessionid=0x101755a8bb70002 2023-07-17 22:15:20,515 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:20,515 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:20,515 DEBUG [RS:2;jenkins-hbase4:41625] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,515 DEBUG [RS:0;jenkins-hbase4:42021] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,516 DEBUG [RS:2;jenkins-hbase4:41625] 
procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41625,1689632118141' 2023-07-17 22:15:20,516 DEBUG [RS:1;jenkins-hbase4:34647] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,517 DEBUG [RS:2;jenkins-hbase4:41625] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:20,516 DEBUG [RS:0;jenkins-hbase4:42021] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42021,1689632117931' 2023-07-17 22:15:20,517 DEBUG [RS:0;jenkins-hbase4:42021] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:20,517 DEBUG [RS:1;jenkins-hbase4:34647] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34647,1689632118064' 2023-07-17 22:15:20,517 DEBUG [RS:1;jenkins-hbase4:34647] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:20,518 DEBUG [RS:2;jenkins-hbase4:41625] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:20,518 DEBUG [RS:0;jenkins-hbase4:42021] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:20,518 DEBUG [RS:1;jenkins-hbase4:34647] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:20,519 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:20,519 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:20,519 DEBUG [RS:2;jenkins-hbase4:41625] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:20,519 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:20,519 DEBUG [RS:2;jenkins-hbase4:41625] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41625,1689632118141' 2023-07-17 22:15:20,519 DEBUG [RS:2;jenkins-hbase4:41625] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:20,519 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:20,519 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:20,519 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:20,523 DEBUG [RS:1;jenkins-hbase4:34647] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:20,519 DEBUG [RS:0;jenkins-hbase4:42021] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:20,523 DEBUG [RS:0;jenkins-hbase4:42021] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42021,1689632117931' 2023-07-17 
22:15:20,523 DEBUG [RS:0;jenkins-hbase4:42021] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:20,523 DEBUG [RS:1;jenkins-hbase4:34647] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34647,1689632118064' 2023-07-17 22:15:20,523 DEBUG [RS:1;jenkins-hbase4:34647] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:20,523 DEBUG [RS:2;jenkins-hbase4:41625] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:20,524 DEBUG [RS:0;jenkins-hbase4:42021] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:20,524 DEBUG [RS:1;jenkins-hbase4:34647] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:20,524 DEBUG [RS:2;jenkins-hbase4:41625] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:20,524 INFO [RS:2;jenkins-hbase4:41625] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:20,524 INFO [RS:2;jenkins-hbase4:41625] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 22:15:20,524 DEBUG [RS:1;jenkins-hbase4:34647] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:20,524 DEBUG [RS:0;jenkins-hbase4:42021] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:20,525 INFO [RS:1;jenkins-hbase4:34647] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:20,525 INFO [RS:0;jenkins-hbase4:42021] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:20,526 INFO [RS:0;jenkins-hbase4:42021] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 22:15:20,525 INFO [RS:1;jenkins-hbase4:34647] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-17 22:15:20,599 DEBUG [jenkins-hbase4:43315] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-17 22:15:20,627 DEBUG [jenkins-hbase4:43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:20,629 DEBUG [jenkins-hbase4:43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:20,629 DEBUG [jenkins-hbase4:43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:20,629 DEBUG [jenkins-hbase4:43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:20,629 DEBUG [jenkins-hbase4:43315] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:20,635 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41625,1689632118141, state=OPENING 2023-07-17 22:15:20,643 INFO [RS:1;jenkins-hbase4:34647] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34647%2C1689632118064, suffix=, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,34647,1689632118064, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs, maxLogs=32 2023-07-17 22:15:20,643 INFO [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42021%2C1689632117931, suffix=, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,42021,1689632117931, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs, maxLogs=32 2023-07-17 22:15:20,651 INFO [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41625%2C1689632118141, suffix=, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,41625,1689632118141, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs, maxLogs=32 2023-07-17 22:15:20,652 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-17 22:15:20,653 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:20,654 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:20,659 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:20,832 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:20,832 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:20,832 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:20,837 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:20,838 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:20,838 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:20,840 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:20,840 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:20,841 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:20,849 INFO [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,41625,1689632118141/jenkins-hbase4.apache.org%2C41625%2C1689632118141.1689632120692 2023-07-17 22:15:20,850 DEBUG [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK], DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK], DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK]] 2023-07-17 22:15:20,860 INFO [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,42021,1689632117931/jenkins-hbase4.apache.org%2C42021%2C1689632117931.1689632120672 2023-07-17 22:15:20,860 INFO [RS:1;jenkins-hbase4:34647] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,34647,1689632118064/jenkins-hbase4.apache.org%2C34647%2C1689632118064.1689632120659 2023-07-17 22:15:20,862 DEBUG [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK], DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK], DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK]] 2023-07-17 22:15:20,863 DEBUG [RS:1;jenkins-hbase4:34647] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK], DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK], DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK]] 2023-07-17 22:15:21,017 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:21,021 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:21,024 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57004, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:21,035 WARN [ReadOnlyZKClient-127.0.0.1:57139@0x339e0a8e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-17 22:15:21,040 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-17 22:15:21,040 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:21,044 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41625%2C1689632118141.meta, suffix=.meta, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,41625,1689632118141, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs, maxLogs=32 2023-07-17 22:15:21,075 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:21,082 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:21,082 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:21,083 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:21,093 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:21,096 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41625] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:57014 deadline: 1689632181095, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:21,099 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,41625,1689632118141/jenkins-hbase4.apache.org%2C41625%2C1689632118141.meta.1689632121046.meta 2023-07-17 22:15:21,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK], DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK], DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK]] 2023-07-17 22:15:21,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:21,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:21,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 22:15:21,115 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
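The WAL configuration entries above (blocksize=256 MB, rollsize=128 MB, maxLogs=32) are consistent with the defaults rather than anything test-specific: the WAL block size defaults to twice the underlying DFS block size, the roll size is the block size scaled by hbase.regionserver.logroll.multiplier (0.5 by default), and 32 is the usual value when hbase.regionserver.maxlogs is left unset:

    blocksize = 2 x 128 MB (DFS default block size) = 256 MB
    rollsize  = 256 MB x 0.5                        = 128 MB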
2023-07-17 22:15:21,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 22:15:21,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:21,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 22:15:21,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 22:15:21,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:21,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info 2023-07-17 22:15:21,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info 2023-07-17 22:15:21,129 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:21,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:21,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:21,132 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:21,132 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:21,132 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:21,133 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:21,133 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:21,135 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table 2023-07-17 22:15:21,135 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table 2023-07-17 22:15:21,136 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:21,137 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:21,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740 2023-07-17 22:15:21,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740 2023-07-17 22:15:21,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-17 22:15:21,148 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:21,150 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9660032480, jitterRate=-0.1003393679857254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:21,150 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:21,163 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689632121005 2023-07-17 22:15:21,186 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41625,1689632118141, state=OPEN 2023-07-17 22:15:21,188 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 22:15:21,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 22:15:21,192 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 22:15:21,192 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:21,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-17 22:15:21,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41625,1689632118141 in 533 msec 2023-07-17 22:15:21,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-17 22:15:21,209 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 865 msec 2023-07-17 22:15:21,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.5190 sec 2023-07-17 22:15:21,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689632121217, completionTime=-1 2023-07-17 22:15:21,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-17 22:15:21,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
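The split-policy summaries in the two meta open messages expose the jitter that ConstantSizeRegionSplitPolicy applies: desiredMaxFileSize is the configured maximum region file size scaled by (1 + jitterRate). Assuming the default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB), both logged values check out:

    10737418240 x (1 + 0.09449152648448944) ≈ 11752013280  (open at 22:15:20,293)
    10737418240 x (1 - 0.1003393679857254)  = 9660032480   (open at 22:15:21,150)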
2023-07-17 22:15:21,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-17 22:15:21,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689632181275 2023-07-17 22:15:21,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689632241275 2023-07-17 22:15:21,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 56 msec 2023-07-17 22:15:21,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43315,1689632115843-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43315,1689632115843-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43315,1689632115843-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43315, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:21,300 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-17 22:15:21,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-17 22:15:21,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:21,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-17 22:15:21,330 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:21,332 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:21,347 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,350 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a empty. 2023-07-17 22:15:21,351 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,351 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-17 22:15:21,400 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:21,403 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fdcdbf251438e26cb4d3816e7324408a, NAME => 'hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:21,423 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:21,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fdcdbf251438e26cb4d3816e7324408a, disabling compactions & flushes 2023-07-17 22:15:21,424 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 
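The create message above spells out the full schema the master uses for hbase:namespace. A sketch of how an equivalent descriptor could be assembled with the 2.x client API; the class name and the println wrapper are illustrative, not part of the test:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
      public static void main(String[] args) {
        // Mirror the attributes in the log: one 'info' family, ROW bloom filter,
        // in-memory, 10 versions, 8 KB blocks, no compression or block encoding.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase:namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setKeepDeletedCells(KeepDeletedCells.FALSE)
                .setDataBlockEncoding(DataBlockEncoding.NONE)
                .setCompressionType(Compression.Algorithm.NONE)
                .setTimeToLive(HConstants.FOREVER)    // TTL => 'FOREVER'
                .setMinVersions(0)
                .setBlockCacheEnabled(true)
                .setBlocksize(8192)
                .setScope(0)                          // REPLICATION_SCOPE => '0'
                .build())
            .build();
        System.out.println(desc);
        // In a test this descriptor could be handed to Admin.createTable(...).
      }
    }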
2023-07-17 22:15:21,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:21,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. after waiting 0 ms 2023-07-17 22:15:21,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:21,424 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:21,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fdcdbf251438e26cb4d3816e7324408a: 2023-07-17 22:15:21,429 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:21,446 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632121432"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632121432"}]},"ts":"1689632121432"} 2023-07-17 22:15:21,482 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:21,484 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:21,489 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632121484"}]},"ts":"1689632121484"} 2023-07-17 22:15:21,493 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-17 22:15:21,499 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:21,499 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:21,499 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:21,499 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:21,499 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:21,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, ASSIGN}] 2023-07-17 22:15:21,506 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, ASSIGN 2023-07-17 22:15:21,508 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:21,659 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 22:15:21,661 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fdcdbf251438e26cb4d3816e7324408a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:21,661 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632121661"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632121661"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632121661"}]},"ts":"1689632121661"} 2023-07-17 22:15:21,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:21,829 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:21,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fdcdbf251438e26cb4d3816e7324408a, NAME => 'hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:21,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:21,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,834 INFO [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,836 DEBUG [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info 2023-07-17 22:15:21,836 DEBUG [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info 2023-07-17 
22:15:21,836 INFO [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fdcdbf251438e26cb4d3816e7324408a columnFamilyName info 2023-07-17 22:15:21,837 INFO [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] regionserver.HStore(310): Store=fdcdbf251438e26cb4d3816e7324408a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:21,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,839 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,843 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:21,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:21,847 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fdcdbf251438e26cb4d3816e7324408a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11480459200, jitterRate=0.06920108199119568}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:21,847 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fdcdbf251438e26cb4d3816e7324408a: 2023-07-17 22:15:21,849 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a., pid=6, masterSystemTime=1689632121823 2023-07-17 22:15:21,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:21,852 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 
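Once a region reports open, its location is published to hbase:meta (the regionState=OPEN update in the next lines), after which any client can resolve it. A small sketch, assuming the standard 2.x client classes and leaving connection configuration to hbase-site.xml; the wrapper class name is invented for illustration:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class LocateNamespaceRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // reload=true forces a fresh hbase:meta lookup instead of using a cached location.
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println(loc.getRegion().getEncodedName() + " on " + loc.getServerName());
        }
      }
    }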
2023-07-17 22:15:21,854 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fdcdbf251438e26cb4d3816e7324408a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:21,854 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632121853"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632121853"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632121853"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632121853"}]},"ts":"1689632121853"} 2023-07-17 22:15:21,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-17 22:15:21,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,41625,1689632118141 in 187 msec 2023-07-17 22:15:21,864 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-17 22:15:21,865 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, ASSIGN in 360 msec 2023-07-17 22:15:21,866 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:21,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632121866"}]},"ts":"1689632121866"} 2023-07-17 22:15:21,869 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-17 22:15:21,872 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:21,875 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 555 msec 2023-07-17 22:15:21,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-17 22:15:21,932 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:21,932 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:21,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-17 22:15:22,006 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): 
master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:22,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 36 msec 2023-07-17 22:15:22,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 22:15:22,033 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:22,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-07-17 22:15:22,049 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-17 22:15:22,051 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-17 22:15:22,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.772sec 2023-07-17 22:15:22,054 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-17 22:15:22,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-17 22:15:22,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-17 22:15:22,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43315,1689632115843-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-17 22:15:22,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43315,1689632115843-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
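The two CreateNamespaceProcedure runs above establish the built-in 'default' and 'hbase' namespaces. A client can confirm that through the Admin API; the sketch below is illustrative only (wrapper class and connection setup assumed, not taken from the test):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListNamespacesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Expect at least "default" and "hbase", created by the procedures logged above.
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }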
2023-07-17 22:15:22,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-17 22:15:22,091 DEBUG [Listener at localhost/37695] zookeeper.ReadOnlyZKClient(139): Connect 0x7f52b832 to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:22,097 DEBUG [Listener at localhost/37695] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5eb21af6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:22,114 DEBUG [hconnection-0x5b2f1cbe-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:22,118 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:22,121 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 22:15:22,123 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:22,125 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:22,128 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,129 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 empty. 
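The 'hbase:rsgroup' descriptor above differs from a plain table in two table-level attributes: a coprocessor (MultiRowMutationEndpoint) and a DisabledRegionSplitPolicy that keeps the group bookkeeping in a single region. Expressed through the 2.x descriptor builders that looks roughly like the sketch below; the table name 'rsgroup_like_demo' and the wrapper class are placeholders, since the real table is created by the master's RSGroupStartupWorker:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeDescriptorSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder name: hbase:rsgroup itself is master-created.
        TableDescriptor table = TableDescriptorBuilder.newBuilder(TableName.valueOf("rsgroup_like_demo"))
            // Table-level coprocessor, the coprocessor$1 attribute in the descriptor above.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // SPLIT_POLICY metadata: keep the bookkeeping table in one region.
            .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .build())
            .build();
        System.out.println(table);
      }
    }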
2023-07-17 22:15:22,132 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,132 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-17 22:15:22,132 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57016, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:22,143 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43315,1689632115843 2023-07-17 22:15:22,144 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:22,156 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:22,157 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 50dfbd4291683110d06a43487ab94cb0, NAME => 'hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:22,179 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:22,179 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 50dfbd4291683110d06a43487ab94cb0, disabling compactions & flushes 2023-07-17 22:15:22,179 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:22,179 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:22,179 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. after waiting 0 ms 2023-07-17 22:15:22,179 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:22,179 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
2023-07-17 22:15:22,179 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 50dfbd4291683110d06a43487ab94cb0: 2023-07-17 22:15:22,183 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:22,185 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632122184"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632122184"}]},"ts":"1689632122184"} 2023-07-17 22:15:22,187 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:22,189 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:22,189 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632122189"}]},"ts":"1689632122189"} 2023-07-17 22:15:22,191 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-17 22:15:22,196 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:22,196 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:22,196 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:22,196 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:22,196 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:22,196 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, ASSIGN}] 2023-07-17 22:15:22,198 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, ASSIGN 2023-07-17 22:15:22,200 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:22,350 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 22:15:22,351 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=50dfbd4291683110d06a43487ab94cb0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:22,352 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632122351"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632122351"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632122351"}]},"ts":"1689632122351"} 2023-07-17 22:15:22,355 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:22,513 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:22,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 50dfbd4291683110d06a43487ab94cb0, NAME => 'hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:22,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:22,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. service=MultiRowMutationService 2023-07-17 22:15:22,515 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
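The 'Loaded coprocessor ... from HTD' line shows the endpoint being picked up from the table descriptor at region open. From a client, the same attribute can be read back through Admin; a rough sketch, with the wrapper class and connection boilerplate assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.CoprocessorDescriptor;

    public class ListTableCoprocessorsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Should report MultiRowMutationEndpoint at priority 536870911, matching the log above.
          for (CoprocessorDescriptor cp :
              admin.getDescriptor(TableName.valueOf("hbase:rsgroup")).getCoprocessorDescriptors()) {
            System.out.println(cp.getClassName() + " priority=" + cp.getPriority());
          }
        }
      }
    }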
2023-07-17 22:15:22,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:22,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,517 INFO [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,519 DEBUG [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m 2023-07-17 22:15:22,519 DEBUG [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m 2023-07-17 22:15:22,520 INFO [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 50dfbd4291683110d06a43487ab94cb0 columnFamilyName m 2023-07-17 22:15:22,521 INFO [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] regionserver.HStore(310): Store=50dfbd4291683110d06a43487ab94cb0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:22,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:22,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:22,532 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 50dfbd4291683110d06a43487ab94cb0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@733bb535, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:22,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 50dfbd4291683110d06a43487ab94cb0: 2023-07-17 22:15:22,533 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0., pid=11, masterSystemTime=1689632122508 2023-07-17 22:15:22,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:22,536 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:22,537 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=50dfbd4291683110d06a43487ab94cb0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:22,537 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632122537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632122537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632122537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632122537"}]},"ts":"1689632122537"} 2023-07-17 22:15:22,546 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-17 22:15:22,546 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,41625,1689632118141 in 187 msec 2023-07-17 22:15:22,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-17 22:15:22,562 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, ASSIGN in 350 msec 2023-07-17 22:15:22,564 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:22,564 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632122564"}]},"ts":"1689632122564"} 2023-07-17 22:15:22,567 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated 
tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-17 22:15:22,570 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:22,573 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 452 msec 2023-07-17 22:15:22,627 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-17 22:15:22,627 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-17 22:15:22,655 DEBUG [Listener at localhost/37695] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-17 22:15:22,659 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-17 22:15:22,674 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-17 22:15:22,674 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:22,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-17 22:15:22,680 DEBUG [Listener at localhost/37695] zookeeper.ReadOnlyZKClient(139): Connect 0x4cec13a7 to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:22,686 DEBUG [Listener at localhost/37695] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@394eed7c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:22,686 INFO [Listener at localhost/37695] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57139 2023-07-17 22:15:22,692 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:22,693 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101755a8bb7000a connected 2023-07-17 22:15:22,709 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:22,711 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/default 2023-07-17 22:15:22,721 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:22,728 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-17 22:15:22,735 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=419, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=372, ProcessCount=174, AvailableMemoryMB=3666 2023-07-17 22:15:22,738 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-17 22:15:22,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:22,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:22,810 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-17 22:15:22,826 INFO [Listener at localhost/37695] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:22,826 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:22,827 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:22,827 INFO [Listener at localhost/37695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:22,827 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:22,827 INFO [Listener at localhost/37695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:22,827 INFO [Listener at localhost/37695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:22,831 INFO [Listener at localhost/37695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34803 2023-07-17 22:15:22,832 INFO [Listener at localhost/37695] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:22,833 DEBUG [Listener at localhost/37695] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:22,834 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 
22:15:22,835 INFO [Listener at localhost/37695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:22,837 INFO [Listener at localhost/37695] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34803 connecting to ZooKeeper ensemble=127.0.0.1:57139 2023-07-17 22:15:22,840 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:348030x0, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:22,841 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(162): regionserver:348030x0, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 22:15:22,842 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34803-0x101755a8bb7000b connected 2023-07-17 22:15:22,842 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(162): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-17 22:15:22,846 DEBUG [Listener at localhost/37695] zookeeper.ZKUtil(164): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:22,847 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34803 2023-07-17 22:15:22,849 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34803 2023-07-17 22:15:22,850 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34803 2023-07-17 22:15:22,853 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34803 2023-07-17 22:15:22,853 DEBUG [Listener at localhost/37695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34803 2023-07-17 22:15:22,855 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:22,856 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:22,856 INFO [Listener at localhost/37695] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:22,857 INFO [Listener at localhost/37695] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:22,857 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:22,857 INFO [Listener at localhost/37695] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:22,857 INFO [Listener at localhost/37695] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and 
async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 22:15:22,858 INFO [Listener at localhost/37695] http.HttpServer(1146): Jetty bound to port 34185 2023-07-17 22:15:22,858 INFO [Listener at localhost/37695] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:22,862 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:22,862 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@623affc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:22,863 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:22,863 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1755bd06{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:22,874 INFO [Listener at localhost/37695] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:22,876 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:22,876 INFO [Listener at localhost/37695] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:22,876 INFO [Listener at localhost/37695] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 22:15:22,879 INFO [Listener at localhost/37695] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:22,880 INFO [Listener at localhost/37695] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@78c1ca58{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:22,890 INFO [Listener at localhost/37695] server.AbstractConnector(333): Started ServerConnector@77e30ea5{HTTP/1.1, (http/1.1)}{0.0.0.0:34185} 2023-07-17 22:15:22,890 INFO [Listener at localhost/37695] server.Server(415): Started @13163ms 2023-07-17 22:15:22,894 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(951): ClusterId : edb3e6a8-bdf6-485c-8606-a46d4ec90872 2023-07-17 22:15:22,895 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:22,899 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:22,899 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:22,902 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:22,904 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ReadOnlyZKClient(139): Connect 0x6b9368bf to 127.0.0.1:57139 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-17 22:15:22,921 DEBUG [RS:3;jenkins-hbase4:34803] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@616b03cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:22,921 DEBUG [RS:3;jenkins-hbase4:34803] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24699186, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:22,932 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34803 2023-07-17 22:15:22,932 INFO [RS:3;jenkins-hbase4:34803] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:22,932 INFO [RS:3;jenkins-hbase4:34803] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:22,932 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 22:15:22,933 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43315,1689632115843 with isa=jenkins-hbase4.apache.org/172.31.14.131:34803, startcode=1689632122825 2023-07-17 22:15:22,933 DEBUG [RS:3;jenkins-hbase4:34803] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:22,938 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36333, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:22,940 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43315] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,940 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
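'Updating default servers' reflects that a freshly registered region server always lands in the default rsgroup until it is explicitly moved. Assuming the 2.4-era hbase-rsgroup client API (RSGroupAdminClient, the class the VerifyingRSGroupAdminClient seen earlier wraps), the membership can be listed as in this sketch; the wrapper class is invented for illustration:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListDefaultGroupServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          // Newly registered region servers, like jenkins-hbase4.apache.org:34803 above, start here.
          for (Address server : defaultGroup.getServers()) {
            System.out.println(server);
          }
        }
      }
    }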
2023-07-17 22:15:22,941 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b 2023-07-17 22:15:22,941 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38457 2023-07-17 22:15:22,941 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33991 2023-07-17 22:15:22,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:22,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:22,948 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34803,1689632122825] 2023-07-17 22:15:22,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:22,949 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:22,949 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:22,950 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:22,951 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ZKUtil(162): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,951 WARN [RS:3;jenkins-hbase4:34803] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-17 22:15:22,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:22,951 INFO [RS:3;jenkins-hbase4:34803] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:22,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:22,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:22,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:22,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:22,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:22,952 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,953 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:22,953 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:22,953 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:22,963 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43315,1689632115843] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-17 22:15:22,967 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ZKUtil(162): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:22,968 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ZKUtil(162): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:22,968 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ZKUtil(162): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:22,969 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ZKUtil(162): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:22,970 DEBUG [RS:3;jenkins-hbase4:34803] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:22,970 INFO [RS:3;jenkins-hbase4:34803] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:22,973 INFO [RS:3;jenkins-hbase4:34803] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:22,974 INFO [RS:3;jenkins-hbase4:34803] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:22,974 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:22,974 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:22,976 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,976 DEBUG [RS:3;jenkins-hbase4:34803] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:22,977 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:22,977 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:22,977 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:22,995 INFO [RS:3;jenkins-hbase4:34803] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:22,995 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34803,1689632122825-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:23,012 INFO [RS:3;jenkins-hbase4:34803] regionserver.Replication(203): jenkins-hbase4.apache.org,34803,1689632122825 started 2023-07-17 22:15:23,012 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34803,1689632122825, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34803, sessionid=0x101755a8bb7000b 2023-07-17 22:15:23,012 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:23,012 DEBUG [RS:3;jenkins-hbase4:34803] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:23,012 DEBUG [RS:3;jenkins-hbase4:34803] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34803,1689632122825' 2023-07-17 22:15:23,012 DEBUG [RS:3;jenkins-hbase4:34803] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:23,013 DEBUG [RS:3;jenkins-hbase4:34803] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:23,014 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:23,014 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:23,014 DEBUG [RS:3;jenkins-hbase4:34803] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:23,014 DEBUG [RS:3;jenkins-hbase4:34803] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34803,1689632122825' 2023-07-17 22:15:23,014 DEBUG [RS:3;jenkins-hbase4:34803] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:23,014 DEBUG [RS:3;jenkins-hbase4:34803] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:23,015 DEBUG [RS:3;jenkins-hbase4:34803] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:23,015 INFO [RS:3;jenkins-hbase4:34803] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:23,015 INFO [RS:3;jenkins-hbase4:34803] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
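The RS:3 startup traced above is a fourth region server joining the three brought up with the mini cluster. A minimal sketch of how a test could start such an extra server on an already-running HBaseTestingUtility mini cluster is shown below; this is an illustration, not part of the captured log, and the class/method names around the call are assumptions.

    // Sketch only -- not from the log. Assumes an already-started
    // HBaseTestingUtility mini cluster; the wrapper class is illustrative.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class StartExtraRegionServer {
      public static void addRegionServer(HBaseTestingUtility testUtil) throws Exception {
        // Starts one more HRegionServer thread in the same JVM; it registers its
        // znode under /hbase/rs and then runs the startup sequence seen above.
        JVMClusterUtil.RegionServerThread rst =
            testUtil.getMiniHBaseCluster().startRegionServer();
        rst.waitForServerOnline();
      }
    }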
2023-07-17 22:15:23,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:23,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:23,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:23,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:23,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:23,038 DEBUG [hconnection-0x63551a-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:23,041 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57018, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:23,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:23,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:23,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:23,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:23,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58158 deadline: 1689633323060, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
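The ConstraintException logged at 22:15:23,062 is RSGroupAdminServer.moveServers rejecting an address that is not a known, online region server -- here the active master's RPC endpoint (port 43315). A minimal client-side sketch of the call path visible in the stack trace follows; only the RSGroupAdminClient calls are taken from the trace, the connection setup and host string are placeholders.

    // Sketch only -- not from the log. Mirrors the branch-2.4 RSGroupAdminClient
    // API that appears in the stack traces above; host/port are placeholders.
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          try {
            // Passing the master's address (not a region server) is expected to
            // fail, matching the ConstraintException recorded above.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:43315")),
                "master");
          } catch (ConstraintException expected) {
            // "Server ... is either offline or it does not exist."
          }
        }
      }
    }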
2023-07-17 22:15:23,063 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:23,066 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:23,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:23,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:23,068 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:23,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:23,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:23,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:23,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:23,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:23,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:23,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:23,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:23,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:23,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:23,102 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:23,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:23,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:23,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:23,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to default 2023-07-17 22:15:23,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:23,119 INFO [RS:3;jenkins-hbase4:34803] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34803%2C1689632122825, suffix=, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,34803,1689632122825, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs, maxLogs=32 2023-07-17 22:15:23,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:23,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:23,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:23,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:23,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:23,149 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:23,159 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:23,162 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:23,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-17 22:15:23,175 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:23,179 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:23,179 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:23,182 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:23,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:23,183 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:23,186 INFO [RS:3;jenkins-hbase4:34803] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,34803,1689632122825/jenkins-hbase4.apache.org%2C34803%2C1689632122825.1689632123120 2023-07-17 22:15:23,189 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:23,200 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,201 DEBUG [RS:3;jenkins-hbase4:34803] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK], 
DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK], DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK]] 2023-07-17 22:15:23,207 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,210 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 empty. 2023-07-17 22:15:23,210 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,212 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,212 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 empty. 2023-07-17 22:15:23,213 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,212 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 empty. 2023-07-17 22:15:23,213 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,214 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,214 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 empty. 
2023-07-17 22:15:23,215 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,215 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,216 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 empty. 2023-07-17 22:15:23,216 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,216 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 22:15:23,269 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:23,278 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49a1607434cef687a7711e7408b388a5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:23,280 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => a63aa27d98a8136d1a23449c4d291ee1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:23,280 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9bf428dad48a669ab21949b9996b2be5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:23,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:23,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 9bf428dad48a669ab21949b9996b2be5, disabling compactions & flushes 2023-07-17 22:15:23,381 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. after waiting 0 ms 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 49a1607434cef687a7711e7408b388a5, disabling compactions & flushes 2023-07-17 22:15:23,381 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. after waiting 0 ms 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:23,382 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 
2023-07-17 22:15:23,382 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 49a1607434cef687a7711e7408b388a5: 2023-07-17 22:15:23,382 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 906632069deb4933b70a391e875e60d7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,383 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing a63aa27d98a8136d1a23449c4d291ee1, disabling compactions & flushes 2023-07-17 22:15:23,383 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,383 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,383 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. after waiting 0 ms 2023-07-17 22:15:23,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:23,383 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for a63aa27d98a8136d1a23449c4d291ee1: 2023-07-17 22:15:23,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 
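The create request logged at 22:15:23,140 pre-splits Group_testTableMoveTruncateAndDrop into five regions whose boundaries appear in the RegionOpenAndInit entries around this point (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz). A sketch of an equivalent client-side create follows; it is illustrative only, assumes the standard branch-2.4 Admin/TableDescriptorBuilder API, and the connection details are placeholders.

    // Sketch only -- not from the log. The split keys mirror the region
    // boundaries recorded above; the two middle keys are raw bytes.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        byte[][] splitKeys = {
            Bytes.toBytes("aaaaa"),
            new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
            new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
            Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          // Five regions: ('', aaaaa), (aaaaa, i\xBF\x14i\xBE), ..., (zzzzz, '')
          admin.createTable(
              TableDescriptorBuilder.newBuilder(table)
                  .setRegionReplication(1)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);
        }
      }
    }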
2023-07-17 22:15:23,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 9bf428dad48a669ab21949b9996b2be5: 2023-07-17 22:15:23,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 21fec2fd58f7374b6652b67ebdd179a4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:23,449 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,450 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 21fec2fd58f7374b6652b67ebdd179a4, disabling compactions & flushes 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 906632069deb4933b70a391e875e60d7, disabling compactions & flushes 2023-07-17 22:15:23,451 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:23,451 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. after waiting 0 ms 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 
after waiting 0 ms 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:23,451 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:23,451 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 21fec2fd58f7374b6652b67ebdd179a4: 2023-07-17 22:15:23,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 906632069deb4933b70a391e875e60d7: 2023-07-17 22:15:23,456 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:23,457 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632123457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632123457"}]},"ts":"1689632123457"} 2023-07-17 22:15:23,457 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632123457"}]},"ts":"1689632123457"} 2023-07-17 22:15:23,457 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632123457"}]},"ts":"1689632123457"} 2023-07-17 22:15:23,457 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632123457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632123457"}]},"ts":"1689632123457"} 2023-07-17 22:15:23,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632123457"}]},"ts":"1689632123457"} 2023-07-17 22:15:23,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:23,514 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 2023-07-17 22:15:23,516 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:23,516 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632123516"}]},"ts":"1689632123516"} 2023-07-17 22:15:23,519 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-17 22:15:23,527 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:23,528 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:23,528 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:23,528 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:23,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, ASSIGN}] 2023-07-17 22:15:23,531 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, ASSIGN 2023-07-17 22:15:23,532 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, ASSIGN 2023-07-17 22:15:23,532 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, ASSIGN 2023-07-17 22:15:23,532 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, ASSIGN 2023-07-17 22:15:23,534 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, ASSIGN 2023-07-17 22:15:23,534 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:23,534 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:23,534 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:23,534 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:23,535 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:23,685 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
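After the balancer places the five regions (22:15:23,685), the remaining TransitRegionStateProcedure/OpenRegionProcedure steps run asynchronously on the region servers. A test typically blocks until they complete; the sketch below assumes the HBaseTestingUtility helpers used by this harness and is not taken from the log (TEST_UTIL-style usage is illustrative).

    // Sketch only -- not from the log. Waits for the assignments traced above
    // to finish before the test proceeds.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignment {
      public static void waitForTable(HBaseTestingUtility testUtil) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Blocks until every region of the table has a live location in
        // hbase:meta, i.e. until the OpenRegionProcedures above have completed.
        testUtil.waitUntilAllRegionsAssigned(table);
        testUtil.waitTableAvailable(table);
      }
    }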
2023-07-17 22:15:23,688 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:23,688 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:23,688 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:23,688 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632123688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632123688"}]},"ts":"1689632123688"} 2023-07-17 22:15:23,688 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632123688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632123688"}]},"ts":"1689632123688"} 2023-07-17 22:15:23,688 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632123688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632123688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632123688"}]},"ts":"1689632123688"} 2023-07-17 22:15:23,688 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:23,688 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:23,689 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632123688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632123688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632123688"}]},"ts":"1689632123688"} 2023-07-17 22:15:23,689 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632123688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632123688"}]},"ts":"1689632123688"} 2023-07-17 22:15:23,692 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 
906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:23,694 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=15, state=RUNNABLE; OpenRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:23,695 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:23,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=17, state=RUNNABLE; OpenRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:23,703 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=14, state=RUNNABLE; OpenRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:23,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:23,849 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:23,849 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:23,854 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:23,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bf428dad48a669ab21949b9996b2be5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 22:15:23,854 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39996, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:23,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,859 INFO [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9bf428dad48a669ab21949b9996b2be5 
2023-07-17 22:15:23,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a63aa27d98a8136d1a23449c4d291ee1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 22:15:23,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,861 DEBUG [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/f 2023-07-17 22:15:23,861 DEBUG [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/f 2023-07-17 22:15:23,862 INFO [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bf428dad48a669ab21949b9996b2be5 columnFamilyName f 2023-07-17 22:15:23,863 INFO [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,863 INFO [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] regionserver.HStore(310): Store=9bf428dad48a669ab21949b9996b2be5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:23,865 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,867 DEBUG [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/f 2023-07-17 22:15:23,867 DEBUG [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/f 2023-07-17 22:15:23,868 INFO [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a63aa27d98a8136d1a23449c4d291ee1 columnFamilyName f 2023-07-17 22:15:23,869 INFO [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] regionserver.HStore(310): Store=a63aa27d98a8136d1a23449c4d291ee1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:23,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:23,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:23,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:23,881 
INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bf428dad48a669ab21949b9996b2be5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10701747040, jitterRate=-0.0033221393823623657}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:23,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bf428dad48a669ab21949b9996b2be5: 2023-07-17 22:15:23,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5., pid=22, masterSystemTime=1689632123845 2023-07-17 22:15:23,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:23,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:23,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:23,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49a1607434cef687a7711e7408b388a5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 22:15:23,885 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:23,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,886 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123885"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632123885"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632123885"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632123885"}]},"ts":"1689632123885"} 2023-07-17 22:15:23,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,892 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing 
ppid=14 2023-07-17 22:15:23,895 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, ASSIGN in 364 msec 2023-07-17 22:15:23,898 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=14, state=SUCCESS; OpenRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,41625,1689632118141 in 186 msec 2023-07-17 22:15:23,898 INFO [StoreOpener-49a1607434cef687a7711e7408b388a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:23,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a63aa27d98a8136d1a23449c4d291ee1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11985880160, jitterRate=0.11627207696437836}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:23,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a63aa27d98a8136d1a23449c4d291ee1: 2023-07-17 22:15:23,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1., pid=19, masterSystemTime=1689632123849 2023-07-17 22:15:23,903 DEBUG [StoreOpener-49a1607434cef687a7711e7408b388a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/f 2023-07-17 22:15:23,903 DEBUG [StoreOpener-49a1607434cef687a7711e7408b388a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/f 2023-07-17 22:15:23,906 INFO [StoreOpener-49a1607434cef687a7711e7408b388a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49a1607434cef687a7711e7408b388a5 columnFamilyName f 2023-07-17 22:15:23,907 INFO [StoreOpener-49a1607434cef687a7711e7408b388a5-1] regionserver.HStore(310): 
Store=49a1607434cef687a7711e7408b388a5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:23,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:23,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21fec2fd58f7374b6652b67ebdd179a4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,909 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,910 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123909"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632123909"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632123909"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632123909"}]},"ts":"1689632123909"} 2023-07-17 22:15:23,914 INFO [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,916 DEBUG [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/f 2023-07-17 22:15:23,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:23,916 DEBUG [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/f 2023-07-17 22:15:23,918 INFO [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21fec2fd58f7374b6652b67ebdd179a4 columnFamilyName f 2023-07-17 22:15:23,918 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=15 2023-07-17 22:15:23,918 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; OpenRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,42021,1689632117931 in 220 msec 2023-07-17 22:15:23,918 INFO [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] regionserver.HStore(310): Store=21fec2fd58f7374b6652b67ebdd179a4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:23,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:23,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,922 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
49a1607434cef687a7711e7408b388a5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9446113440, jitterRate=-0.12026213109493256}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:23,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49a1607434cef687a7711e7408b388a5: 2023-07-17 22:15:23,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5., pid=20, masterSystemTime=1689632123845 2023-07-17 22:15:23,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, ASSIGN in 390 msec 2023-07-17 22:15:23,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:23,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:23,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:23,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 906632069deb4933b70a391e875e60d7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 22:15:23,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:23,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,926 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:23,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:23,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,927 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632123926"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632123926"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632123926"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632123926"}]},"ts":"1689632123926"} 2023-07-17 22:15:23,929 INFO [StoreOpener-906632069deb4933b70a391e875e60d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:23,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21fec2fd58f7374b6652b67ebdd179a4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10121396320, jitterRate=-0.057371512055397034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:23,933 DEBUG [StoreOpener-906632069deb4933b70a391e875e60d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/f 2023-07-17 22:15:23,933 DEBUG [StoreOpener-906632069deb4933b70a391e875e60d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/f 2023-07-17 22:15:23,933 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-17 22:15:23,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21fec2fd58f7374b6652b67ebdd179a4: 2023-07-17 22:15:23,933 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,41625,1689632118141 in 234 msec 2023-07-17 22:15:23,933 INFO [StoreOpener-906632069deb4933b70a391e875e60d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 906632069deb4933b70a391e875e60d7 columnFamilyName f 2023-07-17 22:15:23,934 INFO [StoreOpener-906632069deb4933b70a391e875e60d7-1] 
regionserver.HStore(310): Store=906632069deb4933b70a391e875e60d7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:23,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4., pid=21, masterSystemTime=1689632123849 2023-07-17 22:15:23,940 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, ASSIGN in 405 msec 2023-07-17 22:15:23,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:23,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 
2023-07-17 22:15:23,942 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:23,942 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632123942"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632123942"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632123942"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632123942"}]},"ts":"1689632123942"} 2023-07-17 22:15:23,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:23,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:23,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 906632069deb4933b70a391e875e60d7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10366989760, jitterRate=-0.03449884057044983}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:23,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 906632069deb4933b70a391e875e60d7: 2023-07-17 22:15:23,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7., pid=18, masterSystemTime=1689632123845 2023-07-17 22:15:23,952 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=17 2023-07-17 22:15:23,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=17, state=SUCCESS; OpenRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,42021,1689632117931 in 250 msec 2023-07-17 22:15:23,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:23,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 
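
With all five OpenRegionProcedures finished, the remaining entries for pid=12 update the table descriptor cache, mark the table ENABLED in hbase:meta, and complete the CreateTableProcedure; the client then sees the CREATE operation finish and HBaseTestingUtility waits (60000 ms timeout) until every region is assigned, first against hbase:meta and then against the AssignmentManager state. In test code that wait is a single utility call, sketched below with an assumed shared TEST_UTIL instance.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // In the real test this is a shared static utility that also started the
      // mini cluster; it is constructed here only so the sketch is self-contained.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      static void waitForTableAssigned() throws Exception {
        TableName tableName = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Scans hbase:meta for the table's regions, then checks the
        // AssignmentManager's in-memory states; those are the two
        // "assigned to meta. Checking AM states." and "All regions ... assigned."
        // messages in the log that follows.
        TEST_UTIL.waitUntilAllRegionsAssigned(tableName, 60000);
      }
    }
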
2023-07-17 22:15:23,955 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:23,955 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632123955"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632123955"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632123955"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632123955"}]},"ts":"1689632123955"} 2023-07-17 22:15:23,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, ASSIGN in 425 msec 2023-07-17 22:15:23,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-17 22:15:23,962 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,41625,1689632118141 in 266 msec 2023-07-17 22:15:23,964 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-17 22:15:23,964 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, ASSIGN in 433 msec 2023-07-17 22:15:23,965 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:23,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632123966"}]},"ts":"1689632123966"} 2023-07-17 22:15:23,968 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-17 22:15:23,971 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:23,975 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 831 msec 2023-07-17 22:15:24,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:24,303 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-17 22:15:24,303 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-17 22:15:24,304 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:24,311 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-17 22:15:24,312 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:24,313 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-17 22:15:24,313 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:24,319 DEBUG [Listener at localhost/37695] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:24,324 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35104, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:24,327 DEBUG [Listener at localhost/37695] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:24,332 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46356, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:24,333 DEBUG [Listener at localhost/37695] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:24,337 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57032, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:24,339 DEBUG [Listener at localhost/37695] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:24,349 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40012, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:24,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:24,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:24,363 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:24,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,381 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:24,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:24,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 49a1607434cef687a7711e7408b388a5 to RSGroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:24,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:24,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:24,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:24,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:24,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, REOPEN/MOVE 2023-07-17 22:15:24,391 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, REOPEN/MOVE 2023-07-17 22:15:24,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 9bf428dad48a669ab21949b9996b2be5 to RSGroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:24,393 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:24,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:24,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:24,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:24,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:24,393 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124393"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124393"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124393"}]},"ts":"1689632124393"} 2023-07-17 22:15:24,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, REOPEN/MOVE 2023-07-17 22:15:24,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region a63aa27d98a8136d1a23449c4d291ee1 to RSGroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,397 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, REOPEN/MOVE 2023-07-17 22:15:24,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:24,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:24,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:24,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:24,398 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=23, state=RUNNABLE; CloseRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:24,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:24,398 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:24,398 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124398"}]},"ts":"1689632124398"} 2023-07-17 22:15:24,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, REOPEN/MOVE 2023-07-17 22:15:24,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 906632069deb4933b70a391e875e60d7 to RSGroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,413 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock 
for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, REOPEN/MOVE 2023-07-17 22:15:24,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:24,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=24, state=RUNNABLE; CloseRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:24,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:24,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:24,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:24,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:24,415 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:24,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, REOPEN/MOVE 2023-07-17 22:15:24,417 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124415"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124415"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124415"}]},"ts":"1689632124415"} 2023-07-17 22:15:24,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 21fec2fd58f7374b6652b67ebdd179a4 to RSGroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:24,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:24,418 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, REOPEN/MOVE 2023-07-17 22:15:24,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:24,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:24,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:24,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(378): Number of 
tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:24,421 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:24,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=26, state=RUNNABLE; CloseRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:24,421 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124421"}]},"ts":"1689632124421"} 2023-07-17 22:15:24,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, REOPEN/MOVE 2023-07-17 22:15:24,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_782668524, current retry=0 2023-07-17 22:15:24,424 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, REOPEN/MOVE 2023-07-17 22:15:24,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=28, state=RUNNABLE; CloseRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:24,425 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:24,426 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124425"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124425"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124425"}]},"ts":"1689632124425"} 2023-07-17 22:15:24,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:24,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bf428dad48a669ab21949b9996b2be5, disabling compactions & flushes 2023-07-17 22:15:24,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 
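
Everything from the MoveTables request at 22:15:24,372 onward is the server side of a single rsgroup call: RSGroupInfoManagerImpl rewrites the /hbase/rsgroup znodes, RSGroupAdminServer schedules a REOPEN/MOVE TransitRegionStateProcedure per region (pids 23, 24, 26, 28, 29) with a CloseRegionProcedure child for each (pids 25, 27, 30, 31, 32), and the region servers start closing the regions on their old hosts, as the Close entries here show. A hedged client-side sketch of that call, using the RSGroupAdminClient shipped in this module (the target group is assumed to have been created and populated with servers earlier in the test):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Issues the RSGroupAdminService.MoveTables RPC seen in the log; the master
          // then re-assigns the table's regions onto servers of the target group.
          rsGroupAdmin.moveTables(Collections.singleton(table),
              "Group_testTableMoveTruncateAndDrop_782668524");
          // The table should now report the new group.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("group of " + table + ": " + info.getName());
        }
      }
    }

Once the closes complete, each region is re-opened on a server of the target group; the "move to jenkins-hbase4.apache.org,34803,1689632122825 record at close sequenceid=2" entries written at close time name the chosen destination.
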
2023-07-17 22:15:24,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:24,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. after waiting 0 ms 2023-07-17 22:15:24,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:24,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21fec2fd58f7374b6652b67ebdd179a4, disabling compactions & flushes 2023-07-17 22:15:24,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:24,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:24,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. after waiting 0 ms 2023-07-17 22:15:24,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:24,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:24,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:24,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bf428dad48a669ab21949b9996b2be5: 2023-07-17 22:15:24,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9bf428dad48a669ab21949b9996b2be5 move to jenkins-hbase4.apache.org,34803,1689632122825 record at close sequenceid=2 2023-07-17 22:15:24,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:24,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 
2023-07-17 22:15:24,583 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21fec2fd58f7374b6652b67ebdd179a4: 2023-07-17 22:15:24,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 21fec2fd58f7374b6652b67ebdd179a4 move to jenkins-hbase4.apache.org,34803,1689632122825 record at close sequenceid=2 2023-07-17 22:15:24,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49a1607434cef687a7711e7408b388a5, disabling compactions & flushes 2023-07-17 22:15:24,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:24,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:24,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. after waiting 0 ms 2023-07-17 22:15:24,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:24,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:24,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a63aa27d98a8136d1a23449c4d291ee1, disabling compactions & flushes 2023-07-17 22:15:24,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:24,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:24,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. after waiting 0 ms 2023-07-17 22:15:24,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 
2023-07-17 22:15:24,588 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=CLOSED 2023-07-17 22:15:24,588 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=CLOSED 2023-07-17 22:15:24,588 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632124588"}]},"ts":"1689632124588"} 2023-07-17 22:15:24,588 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632124588"}]},"ts":"1689632124588"} 2023-07-17 22:15:24,598 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-17 22:15:24,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:24,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,42021,1689632117931 in 162 msec 2023-07-17 22:15:24,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:24,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a63aa27d98a8136d1a23449c4d291ee1: 2023-07-17 22:15:24,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a63aa27d98a8136d1a23449c4d291ee1 move to jenkins-hbase4.apache.org,34803,1689632122825 record at close sequenceid=2 2023-07-17 22:15:24,602 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:24,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:24,604 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=24 2023-07-17 22:15:24,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 
2023-07-17 22:15:24,604 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; CloseRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,41625,1689632118141 in 182 msec 2023-07-17 22:15:24,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49a1607434cef687a7711e7408b388a5: 2023-07-17 22:15:24,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 49a1607434cef687a7711e7408b388a5 move to jenkins-hbase4.apache.org,34803,1689632122825 record at close sequenceid=2 2023-07-17 22:15:24,606 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:24,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:24,608 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=CLOSED 2023-07-17 22:15:24,609 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124608"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632124608"}]},"ts":"1689632124608"} 2023-07-17 22:15:24,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 906632069deb4933b70a391e875e60d7, disabling compactions & flushes 2023-07-17 22:15:24,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:24,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:24,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. after waiting 0 ms 2023-07-17 22:15:24,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 
2023-07-17 22:15:24,613 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=CLOSED 2023-07-17 22:15:24,613 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124613"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632124613"}]},"ts":"1689632124613"} 2023-07-17 22:15:24,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:24,621 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=26 2023-07-17 22:15:24,621 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=26, state=SUCCESS; CloseRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,42021,1689632117931 in 191 msec 2023-07-17 22:15:24,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:24,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 906632069deb4933b70a391e875e60d7: 2023-07-17 22:15:24,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 906632069deb4933b70a391e875e60d7 move to jenkins-hbase4.apache.org,34647,1689632118064 record at close sequenceid=2 2023-07-17 22:15:24,624 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:24,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=23 2023-07-17 22:15:24,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=23, state=SUCCESS; CloseRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,41625,1689632118141 in 221 msec 2023-07-17 22:15:24,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,626 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=CLOSED 2023-07-17 22:15:24,627 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:24,627 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124626"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632124626"}]},"ts":"1689632124626"} 2023-07-17 22:15:24,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=28 2023-07-17 22:15:24,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=28, state=SUCCESS; CloseRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,41625,1689632118141 in 204 msec 2023-07-17 22:15:24,634 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34647,1689632118064; forceNewPlan=false, retain=false 2023-07-17 22:15:24,753 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-17 22:15:24,754 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,754 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:24,754 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,754 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124754"}]},"ts":"1689632124754"} 2023-07-17 22:15:24,754 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124754"}]},"ts":"1689632124754"} 2023-07-17 22:15:24,754 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,754 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,754 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124754"}]},"ts":"1689632124754"} 2023-07-17 22:15:24,754 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124754"}]},"ts":"1689632124754"} 2023-07-17 22:15:24,754 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632124754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632124754"}]},"ts":"1689632124754"} 2023-07-17 22:15:24,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=29, state=RUNNABLE; OpenRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:24,758 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=23, state=RUNNABLE; OpenRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:24,760 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=28, state=RUNNABLE; OpenRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:24,764 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=26, state=RUNNABLE; OpenRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:24,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=24, state=RUNNABLE; OpenRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:24,911 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,912 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:24,913 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46368, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:24,919 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:24,919 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:24,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 
2023-07-17 22:15:24,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49a1607434cef687a7711e7408b388a5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 22:15:24,925 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:24,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:24,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,928 INFO [StoreOpener-49a1607434cef687a7711e7408b388a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,929 DEBUG [StoreOpener-49a1607434cef687a7711e7408b388a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/f 2023-07-17 22:15:24,929 DEBUG [StoreOpener-49a1607434cef687a7711e7408b388a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/f 2023-07-17 22:15:24,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 
2023-07-17 22:15:24,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 906632069deb4933b70a391e875e60d7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 22:15:24,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:24,930 INFO [StoreOpener-49a1607434cef687a7711e7408b388a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49a1607434cef687a7711e7408b388a5 columnFamilyName f 2023-07-17 22:15:24,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,931 INFO [StoreOpener-49a1607434cef687a7711e7408b388a5-1] regionserver.HStore(310): Store=49a1607434cef687a7711e7408b388a5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:24,933 INFO [StoreOpener-906632069deb4933b70a391e875e60d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,936 DEBUG [StoreOpener-906632069deb4933b70a391e875e60d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/f 2023-07-17 22:15:24,936 DEBUG [StoreOpener-906632069deb4933b70a391e875e60d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/f 2023-07-17 22:15:24,938 INFO [StoreOpener-906632069deb4933b70a391e875e60d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 906632069deb4933b70a391e875e60d7 columnFamilyName f 2023-07-17 22:15:24,939 INFO [StoreOpener-906632069deb4933b70a391e875e60d7-1] regionserver.HStore(310): Store=906632069deb4933b70a391e875e60d7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:24,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:24,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,942 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49a1607434cef687a7711e7408b388a5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10681365280, jitterRate=-0.005220338702201843}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:24,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49a1607434cef687a7711e7408b388a5: 2023-07-17 22:15:24,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,945 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5., pid=34, masterSystemTime=1689632124911 2023-07-17 22:15:24,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 
2023-07-17 22:15:24,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:24,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:24,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bf428dad48a669ab21949b9996b2be5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 22:15:24,951 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:24,951 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124951"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632124951"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632124951"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632124951"}]},"ts":"1689632124951"} 2023-07-17 22:15:24,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:24,954 INFO [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 906632069deb4933b70a391e875e60d7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10289884800, jitterRate=-0.04167979955673218}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:24,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 906632069deb4933b70a391e875e60d7: 2023-07-17 22:15:24,956 DEBUG 
[StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/f 2023-07-17 22:15:24,956 DEBUG [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/f 2023-07-17 22:15:24,957 INFO [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bf428dad48a669ab21949b9996b2be5 columnFamilyName f 2023-07-17 22:15:24,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7., pid=35, masterSystemTime=1689632124919 2023-07-17 22:15:24,959 INFO [StoreOpener-9bf428dad48a669ab21949b9996b2be5-1] regionserver.HStore(310): Store=9bf428dad48a669ab21949b9996b2be5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:24,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=23 2023-07-17 22:15:24,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=23, state=SUCCESS; OpenRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,34803,1689632122825 in 196 msec 2023-07-17 22:15:24,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:24,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 
2023-07-17 22:15:24,963 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:24,964 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124963"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632124963"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632124963"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632124963"}]},"ts":"1689632124963"} 2023-07-17 22:15:24,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, REOPEN/MOVE in 573 msec 2023-07-17 22:15:24,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:24,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bf428dad48a669ab21949b9996b2be5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9659234240, jitterRate=-0.10041370987892151}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:24,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bf428dad48a669ab21949b9996b2be5: 2023-07-17 22:15:24,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5., pid=37, masterSystemTime=1689632124911 2023-07-17 22:15:24,973 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=28 2023-07-17 22:15:24,973 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=28, state=SUCCESS; OpenRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,34647,1689632118064 in 206 msec 2023-07-17 22:15:24,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:24,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:24,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 
2023-07-17 22:15:24,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21fec2fd58f7374b6652b67ebdd179a4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 22:15:24,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:24,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,976 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, REOPEN/MOVE in 559 msec 2023-07-17 22:15:24,976 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,977 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632124976"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632124976"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632124976"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632124976"}]},"ts":"1689632124976"} 2023-07-17 22:15:24,978 INFO [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,980 DEBUG [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/f 2023-07-17 22:15:24,980 DEBUG [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/f 2023-07-17 22:15:24,981 INFO [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21fec2fd58f7374b6652b67ebdd179a4 columnFamilyName f 2023-07-17 22:15:24,982 INFO [StoreOpener-21fec2fd58f7374b6652b67ebdd179a4-1] regionserver.HStore(310): Store=21fec2fd58f7374b6652b67ebdd179a4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:24,983 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=24 2023-07-17 22:15:24,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=24, state=SUCCESS; OpenRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,34803,1689632122825 in 211 msec 2023-07-17 22:15:24,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,987 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, REOPEN/MOVE in 592 msec 2023-07-17 22:15:24,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:24,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21fec2fd58f7374b6652b67ebdd179a4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10537286720, jitterRate=-0.018638700246810913}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:24,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21fec2fd58f7374b6652b67ebdd179a4: 2023-07-17 22:15:24,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4., pid=33, masterSystemTime=1689632124911 2023-07-17 22:15:24,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:24,998 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 
2023-07-17 22:15:24,998 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:24,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a63aa27d98a8136d1a23449c4d291ee1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 22:15:24,998 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:24,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:24,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:24,999 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632124998"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632124998"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632124998"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632124998"}]},"ts":"1689632124998"} 2023-07-17 22:15:24,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:24,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,001 INFO [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,003 DEBUG [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/f 2023-07-17 22:15:25,003 DEBUG [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/f 2023-07-17 22:15:25,004 INFO [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a63aa27d98a8136d1a23449c4d291ee1 columnFamilyName f 2023-07-17 22:15:25,005 INFO [StoreOpener-a63aa27d98a8136d1a23449c4d291ee1-1] regionserver.HStore(310): Store=a63aa27d98a8136d1a23449c4d291ee1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:25,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,007 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=29 2023-07-17 22:15:25,007 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=29, state=SUCCESS; OpenRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,34803,1689632122825 in 245 msec 2023-07-17 22:15:25,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,012 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, REOPEN/MOVE in 588 msec 2023-07-17 22:15:25,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,017 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a63aa27d98a8136d1a23449c4d291ee1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10764525920, jitterRate=0.0025245994329452515}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:25,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a63aa27d98a8136d1a23449c4d291ee1: 2023-07-17 22:15:25,018 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1., pid=36, masterSystemTime=1689632124911 2023-07-17 22:15:25,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:25,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 
2023-07-17 22:15:25,021 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:25,022 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125021"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632125021"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632125021"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632125021"}]},"ts":"1689632125021"} 2023-07-17 22:15:25,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=26 2023-07-17 22:15:25,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=26, state=SUCCESS; OpenRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,34803,1689632122825 in 260 msec 2023-07-17 22:15:25,029 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, REOPEN/MOVE in 629 msec 2023-07-17 22:15:25,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-17 22:15:25,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_782668524. 
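At this point the RSGroupAdminServer reports that all regions of the table have reached the target group, and the next entries show the client issuing GetRSGroupInfoOfTable and ListRSGroupInfos RPCs. A hedged sketch of how a test might verify that outcome is below; the RSGroupAdminClient usage is an assumption, while the RegionLocator calls are the standard HBase client API.

```java
// Illustrative verification sketch (not extracted from this log): confirm the table's
// group membership and print where each region landed after the move.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;  // assumed helper from hbase-rsgroup
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyGroupAssignmentSketch {
  static void verify(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Mirrors the GetRSGroupInfoOfTable master RPC seen in the following log entries.
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
    System.out.println("table now belongs to group: " + info.getName());
    // Each region location should point at a server that is a member of that group.
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      locator.getAllRegionLocations().forEach(loc ->
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName()));
    }
  }
}
```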
2023-07-17 22:15:25,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:25,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:25,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:25,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:25,441 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:25,450 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:25,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:25,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:25,470 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632125469"}]},"ts":"1689632125469"} 2023-07-17 22:15:25,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-17 22:15:25,472 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-17 22:15:25,475 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-17 22:15:25,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, UNASSIGN}] 2023-07-17 22:15:25,481 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, UNASSIGN 2023-07-17 22:15:25,481 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, UNASSIGN 2023-07-17 22:15:25,481 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, UNASSIGN 2023-07-17 22:15:25,482 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, UNASSIGN 2023-07-17 22:15:25,482 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, UNASSIGN 2023-07-17 22:15:25,483 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:25,483 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125482"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632125482"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632125482"}]},"ts":"1689632125482"} 2023-07-17 22:15:25,483 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:25,483 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632125483"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632125483"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632125483"}]},"ts":"1689632125483"} 2023-07-17 22:15:25,483 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:25,484 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125483"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632125483"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632125483"}]},"ts":"1689632125483"} 2023-07-17 22:15:25,486 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:25,486 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:25,486 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125486"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632125486"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632125486"}]},"ts":"1689632125486"} 2023-07-17 22:15:25,486 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632125486"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632125486"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632125486"}]},"ts":"1689632125486"} 2023-07-17 22:15:25,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=40, state=RUNNABLE; CloseRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:25,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=39, state=RUNNABLE; CloseRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:25,492 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=41, state=RUNNABLE; CloseRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:25,494 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:25,495 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:25,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-17 22:15:25,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:25,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bf428dad48a669ab21949b9996b2be5, disabling compactions & flushes 2023-07-17 22:15:25,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:25,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 
2023-07-17 22:15:25,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. after waiting 0 ms 2023-07-17 22:15:25,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:25,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:25,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5. 2023-07-17 22:15:25,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bf428dad48a669ab21949b9996b2be5: 2023-07-17 22:15:25,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:25,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 906632069deb4933b70a391e875e60d7, disabling compactions & flushes 2023-07-17 22:15:25,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:25,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:25,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:25,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. after waiting 0 ms 2023-07-17 22:15:25,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:25,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:25,658 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=9bf428dad48a669ab21949b9996b2be5, regionState=CLOSED 2023-07-17 22:15:25,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21fec2fd58f7374b6652b67ebdd179a4, disabling compactions & flushes 2023-07-17 22:15:25,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 
2023-07-17 22:15:25,659 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125658"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632125658"}]},"ts":"1689632125658"} 2023-07-17 22:15:25,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:25,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. after waiting 0 ms 2023-07-17 22:15:25,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:25,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:25,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4. 2023-07-17 22:15:25,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21fec2fd58f7374b6652b67ebdd179a4: 2023-07-17 22:15:25,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=40 2023-07-17 22:15:25,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=40, state=SUCCESS; CloseRegionProcedure 9bf428dad48a669ab21949b9996b2be5, server=jenkins-hbase4.apache.org,34803,1689632122825 in 173 msec 2023-07-17 22:15:25,668 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bf428dad48a669ab21949b9996b2be5, UNASSIGN in 188 msec 2023-07-17 22:15:25,668 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=21fec2fd58f7374b6652b67ebdd179a4, regionState=CLOSED 2023-07-17 22:15:25,669 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632125668"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632125668"}]},"ts":"1689632125668"} 2023-07-17 22:15:25,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:25,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:25,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49a1607434cef687a7711e7408b388a5, disabling compactions & flushes 2023-07-17 22:15:25,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): 
Closing region Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:25,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:25,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. after waiting 0 ms 2023-07-17 22:15:25,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 2023-07-17 22:15:25,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:25,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7. 2023-07-17 22:15:25,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 906632069deb4933b70a391e875e60d7: 2023-07-17 22:15:25,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 906632069deb4933b70a391e875e60d7 2023-07-17 22:15:25,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:25,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-17 22:15:25,698 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=906632069deb4933b70a391e875e60d7, regionState=CLOSED 2023-07-17 22:15:25,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 21fec2fd58f7374b6652b67ebdd179a4, server=jenkins-hbase4.apache.org,34803,1689632122825 in 193 msec 2023-07-17 22:15:25,698 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125698"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632125698"}]},"ts":"1689632125698"} 2023-07-17 22:15:25,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5. 
2023-07-17 22:15:25,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49a1607434cef687a7711e7408b388a5: 2023-07-17 22:15:25,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:25,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=21fec2fd58f7374b6652b67ebdd179a4, UNASSIGN in 220 msec 2023-07-17 22:15:25,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a63aa27d98a8136d1a23449c4d291ee1, disabling compactions & flushes 2023-07-17 22:15:25,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:25,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:25,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. after waiting 0 ms 2023-07-17 22:15:25,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 2023-07-17 22:15:25,702 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=49a1607434cef687a7711e7408b388a5, regionState=CLOSED 2023-07-17 22:15:25,703 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632125702"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632125702"}]},"ts":"1689632125702"} 2023-07-17 22:15:25,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-17 22:15:25,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 906632069deb4933b70a391e875e60d7, server=jenkins-hbase4.apache.org,34647,1689632118064 in 207 msec 2023-07-17 22:15:25,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:25,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1. 
2023-07-17 22:15:25,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a63aa27d98a8136d1a23449c4d291ee1: 2023-07-17 22:15:25,711 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=906632069deb4933b70a391e875e60d7, UNASSIGN in 226 msec 2023-07-17 22:15:25,712 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=39 2023-07-17 22:15:25,712 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=39, state=SUCCESS; CloseRegionProcedure 49a1607434cef687a7711e7408b388a5, server=jenkins-hbase4.apache.org,34803,1689632122825 in 215 msec 2023-07-17 22:15:25,712 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=a63aa27d98a8136d1a23449c4d291ee1, regionState=CLOSED 2023-07-17 22:15:25,712 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632125712"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632125712"}]},"ts":"1689632125712"} 2023-07-17 22:15:25,714 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49a1607434cef687a7711e7408b388a5, UNASSIGN in 234 msec 2023-07-17 22:15:25,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,718 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-17 22:15:25,718 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; CloseRegionProcedure a63aa27d98a8136d1a23449c4d291ee1, server=jenkins-hbase4.apache.org,34803,1689632122825 in 223 msec 2023-07-17 22:15:25,724 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=38 2023-07-17 22:15:25,724 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a63aa27d98a8136d1a23449c4d291ee1, UNASSIGN in 240 msec 2023-07-17 22:15:25,726 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632125725"}]},"ts":"1689632125725"} 2023-07-17 22:15:25,727 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-17 22:15:25,729 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-17 22:15:25,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 273 msec 2023-07-17 22:15:25,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-17 22:15:25,775 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-17 22:15:25,777 INFO [Listener at localhost/37695] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:25,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:25,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-17 22:15:25,803 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-17 22:15:25,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 22:15:25,823 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:25,825 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:25,825 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,825 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:25,825 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:25,832 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/recovered.edits] 2023-07-17 22:15:25,832 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/recovered.edits] 2023-07-17 22:15:25,832 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/f, FileablePath, 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/recovered.edits] 2023-07-17 22:15:25,833 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/recovered.edits] 2023-07-17 22:15:25,833 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/recovered.edits] 2023-07-17 22:15:25,858 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5/recovered.edits/7.seqid 2023-07-17 22:15:25,858 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7/recovered.edits/7.seqid 2023-07-17 22:15:25,859 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5/recovered.edits/7.seqid 2023-07-17 22:15:25,859 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49a1607434cef687a7711e7408b388a5 2023-07-17 22:15:25,860 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/906632069deb4933b70a391e875e60d7 2023-07-17 22:15:25,860 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bf428dad48a669ab21949b9996b2be5 2023-07-17 22:15:25,862 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1/recovered.edits/7.seqid 2023-07-17 22:15:25,863 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4/recovered.edits/7.seqid 2023-07-17 22:15:25,863 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a63aa27d98a8136d1a23449c4d291ee1 2023-07-17 22:15:25,864 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/21fec2fd58f7374b6652b67ebdd179a4 2023-07-17 22:15:25,864 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 22:15:25,903 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-17 22:15:25,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 22:15:25,908 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-17 22:15:25,909 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-17 22:15:25,909 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632125909"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:25,909 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632125909"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:25,909 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632125909"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:25,909 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632123131.906632069deb4933b70a391e875e60d7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632125909"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:25,910 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632125909"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:25,913 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-17 22:15:25,913 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 49a1607434cef687a7711e7408b388a5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632123131.49a1607434cef687a7711e7408b388a5.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 9bf428dad48a669ab21949b9996b2be5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689632123131.9bf428dad48a669ab21949b9996b2be5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => a63aa27d98a8136d1a23449c4d291ee1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632123131.a63aa27d98a8136d1a23449c4d291ee1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 906632069deb4933b70a391e875e60d7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632123131.906632069deb4933b70a391e875e60d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 21fec2fd58f7374b6652b67ebdd179a4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632123131.21fec2fd58f7374b6652b67ebdd179a4.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-17 22:15:25,913 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
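
The entries above trace pid=38 (DisableTableProcedure unassigning and closing the five regions, then setting state=DISABLED in hbase:meta) and the start of pid=49 (TruncateTableProcedure with preserveSplits=true archiving the old region directories and deleting the old region rows from META). A minimal sketch of the admin calls that drive this flow, assuming an open Connection named conn (illustrative, not the test's source):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class DisableAndTruncateSketch {
      static void disableAndTruncate(Connection conn) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Admin admin = conn.getAdmin()) {
          // DISABLE: unassigns every region (the UNASSIGN/CloseRegionProcedure
          // entries above) and flips the table state to DISABLED in hbase:meta.
          admin.disableTable(table);
          // TRUNCATE with preserveSplits=true: archives the old region directories,
          // removes the old region rows from hbase:meta, then recreates regions
          // with the same split boundaries before re-enabling the table.
          admin.truncateTable(table, true);
        }
      }
    }
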
2023-07-17 22:15:25,914 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632125914"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:25,922 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-17 22:15:25,935 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:25,935 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:25,935 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:25,935 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:25,935 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:25,936 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 empty. 2023-07-17 22:15:25,936 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 empty. 2023-07-17 22:15:25,937 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 empty. 2023-07-17 22:15:25,937 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd empty. 2023-07-17 22:15:25,937 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 empty. 
2023-07-17 22:15:25,937 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:25,937 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:25,938 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:25,938 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:25,938 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:25,938 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 22:15:25,987 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:25,989 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 86a112363b152821b3a882e4e7eedfdd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:25,990 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => d15d9ffaa4279b559a3d4179f5cdd9d2, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:25,991 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e9c1589dae6e22017ddc1054e81ae278, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:26,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 86a112363b152821b3a882e4e7eedfdd, disabling compactions & flushes 2023-07-17 22:15:26,025 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:26,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:26,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. after waiting 0 ms 2023-07-17 22:15:26,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:26,025 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 
2023-07-17 22:15:26,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 86a112363b152821b3a882e4e7eedfdd: 2023-07-17 22:15:26,025 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4fbae447ff09475573782922ea60fe68, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:26,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e9c1589dae6e22017ddc1054e81ae278, disabling compactions & flushes 2023-07-17 22:15:26,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing d15d9ffaa4279b559a3d4179f5cdd9d2, disabling compactions & flushes 2023-07-17 22:15:26,043 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,043 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:26,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. after waiting 0 ms 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 
after waiting 0 ms 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:26,044 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,044 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e9c1589dae6e22017ddc1054e81ae278: 2023-07-17 22:15:26,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for d15d9ffaa4279b559a3d4179f5cdd9d2: 2023-07-17 22:15:26,045 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => be20919cf8011c405cd066beebb95f34, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:26,049 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 22:15:26,050 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-17 22:15:26,051 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:26,051 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-17 22:15:26,051 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 22:15:26,051 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-17 22:15:26,053 DEBUG 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 4fbae447ff09475573782922ea60fe68, disabling compactions & flushes 2023-07-17 22:15:26,054 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:26,054 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:26,054 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. after waiting 0 ms 2023-07-17 22:15:26,054 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:26,054 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:26,054 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 4fbae447ff09475573782922ea60fe68: 2023-07-17 22:15:26,060 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,060 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing be20919cf8011c405cd066beebb95f34, disabling compactions & flushes 2023-07-17 22:15:26,060 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:26,060 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:26,060 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. after waiting 0 ms 2023-07-17 22:15:26,060 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 
2023-07-17 22:15:26,060 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:26,060 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for be20919cf8011c405cd066beebb95f34: 2023-07-17 22:15:26,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126064"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632126064"}]},"ts":"1689632126064"} 2023-07-17 22:15:26,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126064"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632126064"}]},"ts":"1689632126064"} 2023-07-17 22:15:26,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126064"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632126064"}]},"ts":"1689632126064"} 2023-07-17 22:15:26,065 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126064"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632126064"}]},"ts":"1689632126064"} 2023-07-17 22:15:26,065 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126064"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632126064"}]},"ts":"1689632126064"} 2023-07-17 22:15:26,072 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
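
The RegionOpenAndInit entries above recreate five regions using the table's descriptor (single family 'f', VERSIONS => '1', BLOOMFILTER => 'NONE', the rest at the logged values). Truncate rebuilds these regions server-side; purely as an illustration of what an equivalent descriptor looks like when built with the public client API, here is a hedged sketch. The split keys shown are an illustrative subset: the middle boundaries in the log contain non-ASCII bytes generated by the test, and the attribute list may not match every default exactly.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      static void createPreSplitTable(Connection conn) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Family 'f' with the attributes visible in the log (VERSIONS=1, BLOOMFILTER=NONE).
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.NONE)
                .build())
            .build();
        // Illustrative split keys only; 2 keys yield 3 regions, the test's 4 keys yield 5.
        byte[][] splits = { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splits);
        }
      }
    }
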
2023-07-17 22:15:26,073 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632126073"}]},"ts":"1689632126073"} 2023-07-17 22:15:26,075 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-17 22:15:26,080 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:26,081 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:26,081 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:26,081 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:26,083 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, ASSIGN}] 2023-07-17 22:15:26,086 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, ASSIGN 2023-07-17 22:15:26,086 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, ASSIGN 2023-07-17 22:15:26,086 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, ASSIGN 2023-07-17 22:15:26,086 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, ASSIGN 2023-07-17 22:15:26,087 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, ASSIGN 2023-07-17 22:15:26,089 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34647,1689632118064; forceNewPlan=false, retain=false 2023-07-17 22:15:26,090 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:26,090 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34647,1689632118064; forceNewPlan=false, retain=false 2023-07-17 22:15:26,090 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:26,091 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34803,1689632122825; forceNewPlan=false, retain=false 2023-07-17 22:15:26,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 22:15:26,240 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-17 22:15:26,244 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=d15d9ffaa4279b559a3d4179f5cdd9d2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,244 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=4fbae447ff09475573782922ea60fe68, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:26,244 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126244"}]},"ts":"1689632126244"} 2023-07-17 22:15:26,244 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=be20919cf8011c405cd066beebb95f34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,244 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126244"}]},"ts":"1689632126244"} 2023-07-17 22:15:26,244 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=86a112363b152821b3a882e4e7eedfdd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,244 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=e9c1589dae6e22017ddc1054e81ae278, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:26,244 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126244"}]},"ts":"1689632126244"} 2023-07-17 22:15:26,245 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126244"}]},"ts":"1689632126244"} 2023-07-17 22:15:26,244 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126244"}]},"ts":"1689632126244"} 2023-07-17 22:15:26,246 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=51, state=RUNNABLE; OpenRegionProcedure 
d15d9ffaa4279b559a3d4179f5cdd9d2, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:26,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=53, state=RUNNABLE; OpenRegionProcedure 4fbae447ff09475573782922ea60fe68, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:26,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=50, state=RUNNABLE; OpenRegionProcedure 86a112363b152821b3a882e4e7eedfdd, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:26,250 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=52, state=RUNNABLE; OpenRegionProcedure e9c1589dae6e22017ddc1054e81ae278, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:26,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure be20919cf8011c405cd066beebb95f34, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:26,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:26,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 86a112363b152821b3a882e4e7eedfdd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e9c1589dae6e22017ddc1054e81ae278, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): 
checking classloading for 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 22:15:26,413 INFO [StoreOpener-86a112363b152821b3a882e4e7eedfdd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,414 INFO [StoreOpener-e9c1589dae6e22017ddc1054e81ae278-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,418 DEBUG [StoreOpener-e9c1589dae6e22017ddc1054e81ae278-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/f 2023-07-17 22:15:26,418 DEBUG [StoreOpener-e9c1589dae6e22017ddc1054e81ae278-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/f 2023-07-17 22:15:26,418 DEBUG [StoreOpener-86a112363b152821b3a882e4e7eedfdd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/f 2023-07-17 22:15:26,418 DEBUG [StoreOpener-86a112363b152821b3a882e4e7eedfdd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/f 2023-07-17 22:15:26,418 INFO [StoreOpener-e9c1589dae6e22017ddc1054e81ae278-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e9c1589dae6e22017ddc1054e81ae278 columnFamilyName f 2023-07-17 22:15:26,419 INFO [StoreOpener-86a112363b152821b3a882e4e7eedfdd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 86a112363b152821b3a882e4e7eedfdd columnFamilyName f 2023-07-17 22:15:26,420 INFO [StoreOpener-e9c1589dae6e22017ddc1054e81ae278-1] regionserver.HStore(310): Store=e9c1589dae6e22017ddc1054e81ae278/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:26,420 INFO [StoreOpener-86a112363b152821b3a882e4e7eedfdd-1] regionserver.HStore(310): Store=86a112363b152821b3a882e4e7eedfdd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:26,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:26,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:26,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:26,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:26,445 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 86a112363b152821b3a882e4e7eedfdd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10210806080, jitterRate=-0.04904457926750183}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:26,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 86a112363b152821b3a882e4e7eedfdd: 2023-07-17 22:15:26,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e9c1589dae6e22017ddc1054e81ae278; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10701409600, jitterRate=-0.0033535659313201904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:26,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e9c1589dae6e22017ddc1054e81ae278: 2023-07-17 22:15:26,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd., pid=57, masterSystemTime=1689632126400 2023-07-17 22:15:26,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278., pid=58, masterSystemTime=1689632126400 2023-07-17 22:15:26,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:26,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:26,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 
2023-07-17 22:15:26,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be20919cf8011c405cd066beebb95f34, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 22:15:26,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,451 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=86a112363b152821b3a882e4e7eedfdd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,451 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126451"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632126451"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632126451"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632126451"}]},"ts":"1689632126451"} 2023-07-17 22:15:26,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:26,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 
2023-07-17 22:15:26,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4fbae447ff09475573782922ea60fe68, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 22:15:26,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,459 INFO [StoreOpener-4fbae447ff09475573782922ea60fe68-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,459 INFO [StoreOpener-be20919cf8011c405cd066beebb95f34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,459 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=e9c1589dae6e22017ddc1054e81ae278, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:26,460 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126459"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632126459"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632126459"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632126459"}]},"ts":"1689632126459"} 2023-07-17 22:15:26,465 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=50 2023-07-17 22:15:26,465 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=50, state=SUCCESS; OpenRegionProcedure 86a112363b152821b3a882e4e7eedfdd, server=jenkins-hbase4.apache.org,34803,1689632122825 in 205 msec 2023-07-17 22:15:26,466 DEBUG [StoreOpener-be20919cf8011c405cd066beebb95f34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/f 2023-07-17 22:15:26,466 DEBUG [StoreOpener-be20919cf8011c405cd066beebb95f34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/f 2023-07-17 22:15:26,466 DEBUG [StoreOpener-4fbae447ff09475573782922ea60fe68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/f 2023-07-17 22:15:26,466 DEBUG [StoreOpener-4fbae447ff09475573782922ea60fe68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/f 2023-07-17 22:15:26,467 INFO [StoreOpener-be20919cf8011c405cd066beebb95f34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be20919cf8011c405cd066beebb95f34 columnFamilyName f 2023-07-17 22:15:26,467 INFO [StoreOpener-4fbae447ff09475573782922ea60fe68-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4fbae447ff09475573782922ea60fe68 columnFamilyName f 2023-07-17 22:15:26,468 INFO [StoreOpener-be20919cf8011c405cd066beebb95f34-1] regionserver.HStore(310): Store=be20919cf8011c405cd066beebb95f34/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:26,469 INFO [StoreOpener-4fbae447ff09475573782922ea60fe68-1] regionserver.HStore(310): Store=4fbae447ff09475573782922ea60fe68/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:26,469 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, ASSIGN in 384 msec 2023-07-17 22:15:26,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,470 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,471 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=52 2023-07-17 22:15:26,471 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=52, state=SUCCESS; OpenRegionProcedure e9c1589dae6e22017ddc1054e81ae278, server=jenkins-hbase4.apache.org,34647,1689632118064 in 217 msec 2023-07-17 22:15:26,477 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, ASSIGN in 388 msec 2023-07-17 22:15:26,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:26,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:26,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:26,482 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be20919cf8011c405cd066beebb95f34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10537127840, jitterRate=-0.018653497099876404}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:26,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be20919cf8011c405cd066beebb95f34: 2023-07-17 22:15:26,483 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34., pid=59, masterSystemTime=1689632126400 2023-07-17 22:15:26,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:26,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4fbae447ff09475573782922ea60fe68; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9583851040, jitterRate=-0.10743431746959686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:26,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4fbae447ff09475573782922ea60fe68: 2023-07-17 22:15:26,486 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68., pid=56, masterSystemTime=1689632126400 2023-07-17 22:15:26,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:26,487 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:26,487 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:26,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d15d9ffaa4279b559a3d4179f5cdd9d2, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 22:15:26,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:26,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,487 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=be20919cf8011c405cd066beebb95f34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,488 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126487"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632126487"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632126487"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632126487"}]},"ts":"1689632126487"} 2023-07-17 22:15:26,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:26,488 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:26,489 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=4fbae447ff09475573782922ea60fe68, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:26,490 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126489"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632126489"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632126489"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632126489"}]},"ts":"1689632126489"} 2023-07-17 22:15:26,490 INFO [StoreOpener-d15d9ffaa4279b559a3d4179f5cdd9d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,494 DEBUG [StoreOpener-d15d9ffaa4279b559a3d4179f5cdd9d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/f 2023-07-17 22:15:26,494 DEBUG [StoreOpener-d15d9ffaa4279b559a3d4179f5cdd9d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/f 2023-07-17 22:15:26,494 INFO [StoreOpener-d15d9ffaa4279b559a3d4179f5cdd9d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d15d9ffaa4279b559a3d4179f5cdd9d2 columnFamilyName f 2023-07-17 22:15:26,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-17 22:15:26,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure be20919cf8011c405cd066beebb95f34, server=jenkins-hbase4.apache.org,34803,1689632122825 in 240 msec 2023-07-17 22:15:26,495 INFO [StoreOpener-d15d9ffaa4279b559a3d4179f5cdd9d2-1] regionserver.HStore(310): Store=d15d9ffaa4279b559a3d4179f5cdd9d2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:26,497 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=53 2023-07-17 22:15:26,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=53, state=SUCCESS; OpenRegionProcedure 4fbae447ff09475573782922ea60fe68, server=jenkins-hbase4.apache.org,34647,1689632118064 in 246 msec 2023-07-17 22:15:26,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,497 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, ASSIGN in 412 msec 2023-07-17 22:15:26,499 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, ASSIGN in 414 msec 2023-07-17 22:15:26,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:26,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:26,504 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d15d9ffaa4279b559a3d4179f5cdd9d2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9814989760, jitterRate=-0.08590784668922424}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:26,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d15d9ffaa4279b559a3d4179f5cdd9d2: 2023-07-17 22:15:26,505 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2., pid=55, masterSystemTime=1689632126400 2023-07-17 22:15:26,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:26,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 
2023-07-17 22:15:26,508 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=d15d9ffaa4279b559a3d4179f5cdd9d2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,508 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126508"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632126508"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632126508"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632126508"}]},"ts":"1689632126508"} 2023-07-17 22:15:26,513 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-17 22:15:26,514 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; OpenRegionProcedure d15d9ffaa4279b559a3d4179f5cdd9d2, server=jenkins-hbase4.apache.org,34803,1689632122825 in 265 msec 2023-07-17 22:15:26,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=49 2023-07-17 22:15:26,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, ASSIGN in 433 msec 2023-07-17 22:15:26,516 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632126516"}]},"ts":"1689632126516"} 2023-07-17 22:15:26,518 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-17 22:15:26,520 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-17 22:15:26,522 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 735 msec 2023-07-17 22:15:26,710 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 22:15:26,790 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-17 22:15:26,792 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-17 22:15:26,793 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-17 22:15:26,794 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-17 22:15:26,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 22:15:26,913 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-17 22:15:26,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:26,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:26,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:26,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:26,917 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:26,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:26,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:26,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-17 22:15:26,923 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632126922"}]},"ts":"1689632126922"} 2023-07-17 22:15:26,924 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-17 22:15:26,926 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-17 22:15:26,929 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, UNASSIGN}] 2023-07-17 22:15:26,931 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, UNASSIGN 2023-07-17 22:15:26,931 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, UNASSIGN 2023-07-17 22:15:26,932 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, UNASSIGN 2023-07-17 22:15:26,932 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, UNASSIGN 2023-07-17 22:15:26,932 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, UNASSIGN 2023-07-17 22:15:26,934 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=4fbae447ff09475573782922ea60fe68, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:26,934 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=be20919cf8011c405cd066beebb95f34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,934 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126934"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126934"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126934"}]},"ts":"1689632126934"} 2023-07-17 22:15:26,934 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126934"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126934"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126934"}]},"ts":"1689632126934"} 2023-07-17 22:15:26,935 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=d15d9ffaa4279b559a3d4179f5cdd9d2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,935 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=e9c1589dae6e22017ddc1054e81ae278, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:26,935 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126934"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126934"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126934"}]},"ts":"1689632126934"} 2023-07-17 22:15:26,935 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632126935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126935"}]},"ts":"1689632126935"} 2023-07-17 22:15:26,935 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=86a112363b152821b3a882e4e7eedfdd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:26,935 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632126935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632126935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632126935"}]},"ts":"1689632126935"} 2023-07-17 22:15:26,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=64, state=RUNNABLE; CloseRegionProcedure 4fbae447ff09475573782922ea60fe68, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:26,938 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=65, state=RUNNABLE; CloseRegionProcedure be20919cf8011c405cd066beebb95f34, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:26,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; CloseRegionProcedure d15d9ffaa4279b559a3d4179f5cdd9d2, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:26,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=63, state=RUNNABLE; CloseRegionProcedure e9c1589dae6e22017ddc1054e81ae278, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:26,941 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=61, state=RUNNABLE; CloseRegionProcedure 86a112363b152821b3a882e4e7eedfdd, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:27,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-17 22:15:27,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:27,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 86a112363b152821b3a882e4e7eedfdd, disabling compactions & flushes 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e9c1589dae6e22017ddc1054e81ae278, disabling compactions & flushes 2023-07-17 22:15:27,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:27,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 
2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. after waiting 0 ms 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. after waiting 0 ms 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:27,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:27,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:27,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:27,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278. 2023-07-17 22:15:27,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e9c1589dae6e22017ddc1054e81ae278: 2023-07-17 22:15:27,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd. 2023-07-17 22:15:27,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 86a112363b152821b3a882e4e7eedfdd: 2023-07-17 22:15:27,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:27,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:27,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4fbae447ff09475573782922ea60fe68, disabling compactions & flushes 2023-07-17 22:15:27,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 
2023-07-17 22:15:27,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:27,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. after waiting 0 ms 2023-07-17 22:15:27,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:27,113 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=e9c1589dae6e22017ddc1054e81ae278, regionState=CLOSED 2023-07-17 22:15:27,113 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632127113"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632127113"}]},"ts":"1689632127113"} 2023-07-17 22:15:27,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:27,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:27,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d15d9ffaa4279b559a3d4179f5cdd9d2, disabling compactions & flushes 2023-07-17 22:15:27,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:27,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 2023-07-17 22:15:27,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. after waiting 0 ms 2023-07-17 22:15:27,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 
2023-07-17 22:15:27,115 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=86a112363b152821b3a882e4e7eedfdd, regionState=CLOSED 2023-07-17 22:15:27,115 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632127115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632127115"}]},"ts":"1689632127115"} 2023-07-17 22:15:27,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=63 2023-07-17 22:15:27,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=63, state=SUCCESS; CloseRegionProcedure e9c1589dae6e22017ddc1054e81ae278, server=jenkins-hbase4.apache.org,34647,1689632118064 in 176 msec 2023-07-17 22:15:27,125 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9c1589dae6e22017ddc1054e81ae278, UNASSIGN in 192 msec 2023-07-17 22:15:27,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=61 2023-07-17 22:15:27,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=61, state=SUCCESS; CloseRegionProcedure 86a112363b152821b3a882e4e7eedfdd, server=jenkins-hbase4.apache.org,34803,1689632122825 in 183 msec 2023-07-17 22:15:27,128 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=86a112363b152821b3a882e4e7eedfdd, UNASSIGN in 198 msec 2023-07-17 22:15:27,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:27,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:27,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68. 2023-07-17 22:15:27,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4fbae447ff09475573782922ea60fe68: 2023-07-17 22:15:27,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2. 
2023-07-17 22:15:27,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d15d9ffaa4279b559a3d4179f5cdd9d2: 2023-07-17 22:15:27,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:27,137 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=4fbae447ff09475573782922ea60fe68, regionState=CLOSED 2023-07-17 22:15:27,137 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632127137"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632127137"}]},"ts":"1689632127137"} 2023-07-17 22:15:27,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:27,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:27,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be20919cf8011c405cd066beebb95f34, disabling compactions & flushes 2023-07-17 22:15:27,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:27,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:27,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. after waiting 0 ms 2023-07-17 22:15:27,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 
2023-07-17 22:15:27,140 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=d15d9ffaa4279b559a3d4179f5cdd9d2, regionState=CLOSED 2023-07-17 22:15:27,140 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689632127140"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632127140"}]},"ts":"1689632127140"} 2023-07-17 22:15:27,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=64 2023-07-17 22:15:27,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=64, state=SUCCESS; CloseRegionProcedure 4fbae447ff09475573782922ea60fe68, server=jenkins-hbase4.apache.org,34647,1689632118064 in 204 msec 2023-07-17 22:15:27,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:27,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34. 2023-07-17 22:15:27,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be20919cf8011c405cd066beebb95f34: 2023-07-17 22:15:27,149 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-17 22:15:27,149 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; CloseRegionProcedure d15d9ffaa4279b559a3d4179f5cdd9d2, server=jenkins-hbase4.apache.org,34803,1689632122825 in 204 msec 2023-07-17 22:15:27,153 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4fbae447ff09475573782922ea60fe68, UNASSIGN in 217 msec 2023-07-17 22:15:27,153 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:27,154 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d15d9ffaa4279b559a3d4179f5cdd9d2, UNASSIGN in 221 msec 2023-07-17 22:15:27,154 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=be20919cf8011c405cd066beebb95f34, regionState=CLOSED 2023-07-17 22:15:27,154 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689632127154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632127154"}]},"ts":"1689632127154"} 2023-07-17 22:15:27,162 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=65 2023-07-17 22:15:27,162 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=65, state=SUCCESS; CloseRegionProcedure be20919cf8011c405cd066beebb95f34, server=jenkins-hbase4.apache.org,34803,1689632122825 in 219 
msec 2023-07-17 22:15:27,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=60 2023-07-17 22:15:27,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be20919cf8011c405cd066beebb95f34, UNASSIGN in 234 msec 2023-07-17 22:15:27,177 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632127177"}]},"ts":"1689632127177"} 2023-07-17 22:15:27,179 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-17 22:15:27,182 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-17 22:15:27,186 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 266 msec 2023-07-17 22:15:27,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-17 22:15:27,226 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-17 22:15:27,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,249 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_782668524' 2023-07-17 22:15:27,251 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:27,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:27,277 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:27,277 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:27,277 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:27,277 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:27,277 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:27,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-17 22:15:27,282 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/recovered.edits] 2023-07-17 22:15:27,282 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/recovered.edits] 2023-07-17 22:15:27,283 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/recovered.edits] 2023-07-17 22:15:27,283 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/recovered.edits] 2023-07-17 22:15:27,284 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/f, FileablePath, 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/recovered.edits] 2023-07-17 22:15:27,302 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd/recovered.edits/4.seqid 2023-07-17 22:15:27,302 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2/recovered.edits/4.seqid 2023-07-17 22:15:27,302 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278/recovered.edits/4.seqid 2023-07-17 22:15:27,302 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68/recovered.edits/4.seqid 2023-07-17 22:15:27,304 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9c1589dae6e22017ddc1054e81ae278 2023-07-17 22:15:27,304 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/86a112363b152821b3a882e4e7eedfdd 2023-07-17 22:15:27,305 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d15d9ffaa4279b559a3d4179f5cdd9d2 2023-07-17 22:15:27,305 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4fbae447ff09475573782922ea60fe68 2023-07-17 22:15:27,306 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/recovered.edits/4.seqid to 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34/recovered.edits/4.seqid 2023-07-17 22:15:27,306 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be20919cf8011c405cd066beebb95f34 2023-07-17 22:15:27,307 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 22:15:27,310 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,318 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-17 22:15:27,321 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-17 22:15:27,322 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,322 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-17 22:15:27,322 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632127322"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:27,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632127322"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:27,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632127322"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:27,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632127322"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:27,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632127322"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:27,336 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-17 22:15:27,336 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 86a112363b152821b3a882e4e7eedfdd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689632125866.86a112363b152821b3a882e4e7eedfdd.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => d15d9ffaa4279b559a3d4179f5cdd9d2, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689632125866.d15d9ffaa4279b559a3d4179f5cdd9d2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e9c1589dae6e22017ddc1054e81ae278, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689632125866.e9c1589dae6e22017ddc1054e81ae278.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 4fbae447ff09475573782922ea60fe68, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689632125866.4fbae447ff09475573782922ea60fe68.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => be20919cf8011c405cd066beebb95f34, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689632125866.be20919cf8011c405cd066beebb95f34.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-17 22:15:27,336 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-17 22:15:27,336 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632127336"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:27,338 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-17 22:15:27,340 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 22:15:27,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 105 msec 2023-07-17 22:15:27,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-17 22:15:27,382 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-17 22:15:27,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:27,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:27,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:27,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:27,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:27,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup default 2023-07-17 22:15:27,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:27,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:27,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_782668524, current retry=0 2023-07-17 22:15:27,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:27,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_782668524 => default 2023-07-17 22:15:27,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:27,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_782668524 2023-07-17 22:15:27,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:27,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:27,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:27,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:27,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:27,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:27,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:27,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:27,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:27,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:27,451 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:27,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:27,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:27,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:27,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:27,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633327475, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:27,476 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:27,479 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:27,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,481 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:27,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:27,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:27,525 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=491 (was 419) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_-1970743659_17 at /127.0.0.1:49642 [Receiving block BP-690370225-172.31.14.131-1689632111991:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57139@0x6b9368bf-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-690370225-172.31.14.131-1689632111991:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1970743659_17 at /127.0.0.1:34036 [Receiving block BP-690370225-172.31.14.131-1689632111991:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34803-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-497c82a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-633 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57139@0x6b9368bf-SendThread(127.0.0.1:57139) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:38457 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x63551a-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp808657323-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57139@0x6b9368bf sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp808657323-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-634-acceptor-0@7c6e3c9d-ServerConnector@77e30ea5{HTTP/1.1, (http/1.1)}{0.0.0.0:34185} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1970743659_17 at /127.0.0.1:40486 [Receiving block BP-690370225-172.31.14.131-1689632111991:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1482246376_17 at /127.0.0.1:34024 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-690370225-172.31.14.131-1689632111991:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34803 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:38457 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1482246376_17 at /127.0.0.1:43436 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:34803Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b-prefix:jenkins-hbase4.apache.org,34803,1689632122825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1970743659_17 at /127.0.0.1:40556 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-690370225-172.31.14.131-1689632111991:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp808657323-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 372), ProcessCount=172 (was 174), AvailableMemoryMB=3426 (was 3666) 2023-07-17 22:15:27,547 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=491, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=172, AvailableMemoryMB=3424 2023-07-17 22:15:27,547 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-17 22:15:27,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:27,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:27,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:27,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:27,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:27,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:27,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:27,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:27,569 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:27,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:27,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:27,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:27,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:27,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633327584, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:27,585 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:27,587 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:27,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,588 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:27,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:27,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:27,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-17 22:15:27,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:58158 deadline: 1689633327590, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-17 22:15:27,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-17 22:15:27,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:58158 deadline: 1689633327592, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-17 22:15:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-17 22:15:27,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:58158 deadline: 1689633327594, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-17 22:15:27,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-17 22:15:27,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-17 22:15:27,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:27,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:27,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:27,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
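
The entries above come from TestRSGroupsAdmin1#testValidGroupNames: "foo*", "foo@" and "-" are each rejected with ConstraintException ("RSGroup name should only contain alphanumeric characters"), while "foo_123" is accepted and written to /hbase/rsgroup/foo_123, so the alphabet the check actually allows is letters, digits and underscore. A minimal standalone sketch of that rule, in plain Java rather than the HBase checkGroupName source, with the sample names taken from the log:

public class GroupNameRuleSketch {
  // Accept only names made of letters, digits and underscore, matching the
  // behaviour observed above (underscore passes even though the error message
  // says "alphanumeric characters").
  static boolean isValidGroupName(String name) {
    return name != null && !name.isEmpty() && name.matches("[a-zA-Z0-9_]+");
  }

  public static void main(String[] args) {
    for (String n : new String[] {"foo*", "foo@", "-", "foo_123"}) {
      System.out.println(n + " -> " + (isValidGroupName(n) ? "accepted" : "rejected"));
    }
  }
}
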
2023-07-17 22:15:27,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:27,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:27,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:27,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-17 22:15:27,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:27,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:27,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:27,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
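
The recurring ConstraintException traces above come from the shared setup/teardown in TestRSGroupsBase, which tries to move the master's own address (jenkins-hbase4.apache.org:43315, the port in the RPC handler names) into the "master" group; only live region servers are registered with the group manager, so the call is rejected and merely logged as "Got this on setup, FYI". A hedged sketch of that call pattern, assuming the RSGroupAdmin and Address classes named in the stack traces and treating the exact signatures as assumptions, not the test's actual code:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

public class MoveMasterSketch {
  // Try to place the master's host:port in the "master" group and tolerate the
  // expected rejection, mirroring the warning the test base logs above.
  static void tryMoveMasterToItsGroup(RSGroupAdmin admin, String masterHostPort)
      throws IOException {
    try {
      admin.moveServers(Collections.singleton(Address.fromString(masterHostPort)), "master");
    } catch (ConstraintException expected) {
      System.out.println("Got this on setup, FYI: " + expected.getMessage());
    }
  }
}
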
2023-07-17 22:15:27,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:27,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:27,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:27,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:27,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:27,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:27,650 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:27,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:27,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:27,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:27,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:27,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633327676, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:27,677 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:27,679 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:27,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,680 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:27,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:27,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:27,700 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=494 (was 491) Potentially hanging thread: hconnection-0x63551a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 772), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=172 (was 172), AvailableMemoryMB=3418 (was 3424) 2023-07-17 22:15:27,720 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=494, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=172, AvailableMemoryMB=3416 2023-07-17 22:15:27,720 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-17 22:15:27,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:27,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:27,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:27,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:27,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:27,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:27,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:27,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:27,739 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:27,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:27,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:27,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:27,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:27,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:27,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633327752, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:27,753 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:27,755 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:27,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,756 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:27,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:27,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:27,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:27,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:27,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-17 22:15:27,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 22:15:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:27,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:27,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:27,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:27,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:41625] to rsgroup bar 2023-07-17 22:15:27,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:27,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 22:15:27,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:27,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:27,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(238): Moving server region fdcdbf251438e26cb4d3816e7324408a, which do not belong to RSGroup bar 2023-07-17 22:15:27,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, REOPEN/MOVE 2023-07-17 22:15:27,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(238): Moving server region 50dfbd4291683110d06a43487ab94cb0, which do not belong to RSGroup bar 2023-07-17 22:15:27,792 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, REOPEN/MOVE 2023-07-17 22:15:27,793 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=fdcdbf251438e26cb4d3816e7324408a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:27,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, REOPEN/MOVE 2023-07-17 22:15:27,793 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632127793"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632127793"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632127793"}]},"ts":"1689632127793"} 2023-07-17 22:15:27,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-17 22:15:27,794 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, REOPEN/MOVE 2023-07-17 22:15:27,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-17 22:15:27,795 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=50dfbd4291683110d06a43487ab94cb0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:27,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-17 22:15:27,796 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-17 22:15:27,796 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632127795"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632127795"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632127795"}]},"ts":"1689632127795"} 2023-07-17 22:15:27,797 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:27,798 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41625,1689632118141, state=CLOSING 2023-07-17 22:15:27,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:27,800 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 22:15:27,800 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:27,800 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=74, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:27,800 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:27,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:27,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-17 22:15:27,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fdcdbf251438e26cb4d3816e7324408a, disabling compactions & flushes 2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:27,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:27,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. after waiting 0 ms 2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 
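The entries above trace a client sequence against the master's RSGroupAdminService: AddRSGroup creates group "bar", ListRSGroupInfos reads the groups back, and MoveServers shifts three region servers (34803, 34647, 41625) into "bar", which forces the master to move the system regions still hosted on 41625 back to a default-group server (the REOPEN/MOVE procedures pid=72, 73 and 74). A minimal client-side sketch that would issue the same requests, assuming the branch-2.4 hbase-rsgroup client (RSGroupAdminClient) is on the classpath; class names, host names and ports here are illustrative only:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersToBar {  // sketch only; not part of the test
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // RSGroupAdminService.AddRSGroup: create the empty group "bar"
          rsGroupAdmin.addRSGroup("bar");
          // RSGroupAdminService.MoveServers: the membership change that triggers the
          // REOPEN/MOVE procedures for regions still hosted on the moved servers
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34803));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34647));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41625));
          rsGroupAdmin.moveServers(servers, "bar");
          // RSGroupAdminService.ListRSGroupInfos: read the group list back
          rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));
        }
      }
    }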
2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:27,954 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:27,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fdcdbf251438e26cb4d3816e7324408a 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-17 22:15:27,956 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.81 KB heapSize=63.08 KB 2023-07-17 22:15:28,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/.tmp/info/38aa0a5ce5fb4bdcaabde3b2065c0dec 2023-07-17 22:15:28,056 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.75 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/info/5faad7eaa12d4939b02672ce40e111bb 2023-07-17 22:15:28,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5faad7eaa12d4939b02672ce40e111bb 2023-07-17 22:15:28,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/.tmp/info/38aa0a5ce5fb4bdcaabde3b2065c0dec as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info/38aa0a5ce5fb4bdcaabde3b2065c0dec 2023-07-17 22:15:28,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info/38aa0a5ce5fb4bdcaabde3b2065c0dec, entries=2, sequenceid=6, filesize=4.8 K 2023-07-17 22:15:28,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for fdcdbf251438e26cb4d3816e7324408a in 171ms, sequenceid=6, compaction requested=false 2023-07-17 22:15:28,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/rep_barrier/076e16686faf4cf5a1a8be022001b673 2023-07-17 22:15:28,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-17 22:15:28,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 
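Before each region close the memstore is flushed to an HFile so no edits are lost: 78 B from hbase:namespace at sequenceid=6 just completed, and the 3-family hbase:meta flush (40.81 KB at sequenceid=92) is still writing its store files in the entries that follow. Outside of a close, the same memstore-to-HFile flush can be requested explicitly through the Admin API; a small sketch, assuming an already-open Connection:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class FlushSystemTables {  // sketch only
      static void flush(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          // The close path above performs the same flush work implicitly.
          admin.flush(TableName.NAMESPACE_TABLE_NAME);  // hbase:namespace
          admin.flush(TableName.META_TABLE_NAME);       // hbase:meta
        }
      }
    }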
2023-07-17 22:15:28,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fdcdbf251438e26cb4d3816e7324408a: 2023-07-17 22:15:28,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fdcdbf251438e26cb4d3816e7324408a move to jenkins-hbase4.apache.org,42021,1689632117931 record at close sequenceid=6 2023-07-17 22:15:28,150 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 076e16686faf4cf5a1a8be022001b673 2023-07-17 22:15:28,153 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:28,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:28,187 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/table/7d52eaaa96de4627934aee3fd3f744fe 2023-07-17 22:15:28,195 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d52eaaa96de4627934aee3fd3f744fe 2023-07-17 22:15:28,196 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/info/5faad7eaa12d4939b02672ce40e111bb as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info/5faad7eaa12d4939b02672ce40e111bb 2023-07-17 22:15:28,205 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5faad7eaa12d4939b02672ce40e111bb 2023-07-17 22:15:28,206 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info/5faad7eaa12d4939b02672ce40e111bb, entries=42, sequenceid=92, filesize=9.7 K 2023-07-17 22:15:28,208 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/rep_barrier/076e16686faf4cf5a1a8be022001b673 as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier/076e16686faf4cf5a1a8be022001b673 2023-07-17 22:15:28,216 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 076e16686faf4cf5a1a8be022001b673 2023-07-17 22:15:28,216 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier/076e16686faf4cf5a1a8be022001b673, entries=10, sequenceid=92, filesize=6.1 K 2023-07-17 22:15:28,217 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/table/7d52eaaa96de4627934aee3fd3f744fe as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table/7d52eaaa96de4627934aee3fd3f744fe 2023-07-17 22:15:28,225 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d52eaaa96de4627934aee3fd3f744fe 2023-07-17 22:15:28,225 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table/7d52eaaa96de4627934aee3fd3f744fe, entries=15, sequenceid=92, filesize=6.2 K 2023-07-17 22:15:28,226 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.81 KB/41791, heapSize ~63.03 KB/64544, currentSize=0 B/0 for 1588230740 in 271ms, sequenceid=92, compaction requested=false 2023-07-17 22:15:28,240 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/recovered.edits/95.seqid, newMaxSeqId=95, maxSeqId=1 2023-07-17 22:15:28,240 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:28,241 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:28,241 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:28,241 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,42021,1689632117931 record at close sequenceid=92 2023-07-17 22:15:28,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-17 22:15:28,244 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-17 22:15:28,245 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=74 2023-07-17 22:15:28,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=74, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41625,1689632118141 in 444 msec 2023-07-17 22:15:28,246 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:28,397 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42021,1689632117931, state=OPENING 2023-07-17 22:15:28,399 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 
22:15:28,400 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=74, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:28,400 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:28,556 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-17 22:15:28,557 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:28,559 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42021%2C1689632117931.meta, suffix=.meta, logDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,42021,1689632117931, archiveDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs, maxLogs=32 2023-07-17 22:15:28,579 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK] 2023-07-17 22:15:28,579 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK] 2023-07-17 22:15:28,583 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK] 2023-07-17 22:15:28,586 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/WALs/jenkins-hbase4.apache.org,42021,1689632117931/jenkins-hbase4.apache.org%2C42021%2C1689632117931.meta.1689632128560.meta 2023-07-17 22:15:28,586 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44577,DS-3d412bac-d6ee-40f8-b24e-fa4cf5d7d6ec,DISK], DatanodeInfoWithStorage[127.0.0.1:44355,DS-8d3ce80f-cbe8-4b6e-94f2-9c9f4fe1c3b4,DISK], DatanodeInfoWithStorage[127.0.0.1:45423,DS-92bde3d4-91ce-4c0f-9241-ef4d25e6ef6e,DISK]] 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 22:15:28,587 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 22:15:28,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 22:15:28,589 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:28,591 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info 2023-07-17 22:15:28,591 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info 2023-07-17 22:15:28,591 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:28,599 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5faad7eaa12d4939b02672ce40e111bb 2023-07-17 22:15:28,599 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info/5faad7eaa12d4939b02672ce40e111bb 2023-07-17 22:15:28,600 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:28,600 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:28,601 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:28,601 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:28,602 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:28,610 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 076e16686faf4cf5a1a8be022001b673 2023-07-17 22:15:28,610 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier/076e16686faf4cf5a1a8be022001b673 2023-07-17 22:15:28,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:28,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:28,612 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table 2023-07-17 22:15:28,612 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table 2023-07-17 22:15:28,613 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:28,632 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d52eaaa96de4627934aee3fd3f744fe 2023-07-17 22:15:28,632 DEBUG 
[StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table/7d52eaaa96de4627934aee3fd3f744fe 2023-07-17 22:15:28,632 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:28,633 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740 2023-07-17 22:15:28,635 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740 2023-07-17 22:15:28,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-17 22:15:28,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:28,640 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=96; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11035040960, jitterRate=0.027718275785446167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:28,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:28,641 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=78, masterSystemTime=1689632128552 2023-07-17 22:15:28,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 22:15:28,643 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 22:15:28,643 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42021,1689632117931, state=OPEN 2023-07-17 22:15:28,645 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 22:15:28,645 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:28,647 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=fdcdbf251438e26cb4d3816e7324408a, regionState=CLOSED 2023-07-17 22:15:28,647 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632128647"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632128647"}]},"ts":"1689632128647"} 
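At this point hbase:meta has finished its REOPEN/MOVE onto jenkins-hbase4.apache.org,42021 and the new location has been republished under the /hbase/meta-region-server znode; clients still holding the old location get RegionMovedException (next entries) and re-look it up. A sketch of that client-side lookup with a forced cache refresh, assuming an open Connection:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class WhereIsMeta {  // sketch only
      static void print(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // reload=true bypasses the client's location cache, which is what a client
          // effectively does after seeing a RegionMovedException for hbase:meta
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }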
2023-07-17 22:15:28,650 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41625] ipc.CallRunner(144): callId: 178 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:57014 deadline: 1689632188648, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42021 startCode=1689632117931. As of locationSeqNum=92. 2023-07-17 22:15:28,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=74 2023-07-17 22:15:28,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=74, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42021,1689632117931 in 246 msec 2023-07-17 22:15:28,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 857 msec 2023-07-17 22:15:28,752 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:28,753 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40026, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:28,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=72 2023-07-17 22:15:28,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=72, state=SUCCESS; CloseRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,41625,1689632118141 in 958 msec 2023-07-17 22:15:28,763 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:28,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-17 22:15:28,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 50dfbd4291683110d06a43487ab94cb0, disabling compactions & flushes 2023-07-17 22:15:28,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. after waiting 0 ms 2023-07-17 22:15:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
2023-07-17 22:15:28,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 50dfbd4291683110d06a43487ab94cb0 1/1 column families, dataSize=6.36 KB heapSize=10.50 KB 2023-07-17 22:15:28,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.36 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/.tmp/m/0217a03bcc814095a775bf2d28f9290f 2023-07-17 22:15:28,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0217a03bcc814095a775bf2d28f9290f 2023-07-17 22:15:28,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/.tmp/m/0217a03bcc814095a775bf2d28f9290f as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m/0217a03bcc814095a775bf2d28f9290f 2023-07-17 22:15:28,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0217a03bcc814095a775bf2d28f9290f 2023-07-17 22:15:28,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m/0217a03bcc814095a775bf2d28f9290f, entries=9, sequenceid=26, filesize=5.5 K 2023-07-17 22:15:28,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.36 KB/6514, heapSize ~10.48 KB/10736, currentSize=0 B/0 for 50dfbd4291683110d06a43487ab94cb0 in 51ms, sequenceid=26, compaction requested=false 2023-07-17 22:15:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-17 22:15:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:28,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
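The region relocations in this stretch are initiated server side by RSGroupAdminServer as part of the MoveServers call; an equivalent relocation of a single region can also be requested from a client with Admin#move, which schedules the same REOPEN/MOVE transition. A sketch, reusing the encoded region name and destination server from the log purely as placeholders:

    import java.io.IOException;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveOneRegion {  // sketch only
      static void move(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          // Encoded region name and target server copied from the log, for illustration only.
          byte[] encodedRegionName = Bytes.toBytes("50dfbd4291683110d06a43487ab94cb0");
          ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org,42021,1689632117931");
          admin.move(encodedRegionName, dest);
        }
      }
    }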
2023-07-17 22:15:28,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 50dfbd4291683110d06a43487ab94cb0: 2023-07-17 22:15:28,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 50dfbd4291683110d06a43487ab94cb0 move to jenkins-hbase4.apache.org,42021,1689632117931 record at close sequenceid=26 2023-07-17 22:15:28,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:28,864 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=50dfbd4291683110d06a43487ab94cb0, regionState=CLOSED 2023-07-17 22:15:28,864 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632128864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632128864"}]},"ts":"1689632128864"} 2023-07-17 22:15:28,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-17 22:15:28,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure 50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,41625,1689632118141 in 1.0680 sec 2023-07-17 22:15:28,871 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:28,872 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=fdcdbf251438e26cb4d3816e7324408a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:28,872 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632128872"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632128872"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632128872"}]},"ts":"1689632128872"} 2023-07-17 22:15:28,872 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=50dfbd4291683110d06a43487ab94cb0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:28,872 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632128872"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632128872"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632128872"}]},"ts":"1689632128872"} 2023-07-17 22:15:28,876 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=72, state=RUNNABLE; OpenRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:28,876 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=73, state=RUNNABLE; OpenRegionProcedure 
50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:29,032 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:29,032 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fdcdbf251438e26cb4d3816e7324408a, NAME => 'hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:29,032 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,032 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:29,032 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,032 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,034 INFO [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,035 DEBUG [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info 2023-07-17 22:15:29,035 DEBUG [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info 2023-07-17 22:15:29,035 INFO [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fdcdbf251438e26cb4d3816e7324408a columnFamilyName info 2023-07-17 22:15:29,045 DEBUG [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] regionserver.HStore(539): loaded hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/info/38aa0a5ce5fb4bdcaabde3b2065c0dec 2023-07-17 22:15:29,045 INFO [StoreOpener-fdcdbf251438e26cb4d3816e7324408a-1] regionserver.HStore(310): 
Store=fdcdbf251438e26cb4d3816e7324408a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:29,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:29,052 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fdcdbf251438e26cb4d3816e7324408a; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10466806080, jitterRate=-0.02520272135734558}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:29,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fdcdbf251438e26cb4d3816e7324408a: 2023-07-17 22:15:29,053 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a., pid=79, masterSystemTime=1689632129027 2023-07-17 22:15:29,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:29,055 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:29,055 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
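Each store open above re-logs its effective CompactionConfiguration: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0, major period 604800000 ms with 0.5 jitter. These values correspond to standard configuration keys; the sketch below sets them programmatically. The property names are the stock hbase-site.xml keys as I recall them and should be verified against the hbase-default.xml shipped with this branch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuning {  // sketch only; values mirror the log lines above
      static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
        return conf;
      }
    }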
2023-07-17 22:15:29,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 50dfbd4291683110d06a43487ab94cb0, NAME => 'hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:29,055 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=fdcdbf251438e26cb4d3816e7324408a, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:29,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:29,055 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632129055"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632129055"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632129055"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632129055"}]},"ts":"1689632129055"} 2023-07-17 22:15:29,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. service=MultiRowMutationService 2023-07-17 22:15:29,056 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
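The MultiRowMutationEndpoint coprocessor is loaded from the table descriptor (HTD) of hbase:rsgroup every time the region opens, and it registers the MultiRowMutationService shown a few entries above. As an illustration of how such an endpoint ends up in a descriptor, here is a sketch using the 2.x builder API; the table and family names are hypothetical and this is not how the system table itself is created in this run:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class DescriptorWithCoprocessor {  // sketch only
      static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("some_table"))  // hypothetical name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            // Same endpoint class the log shows being loaded from the hbase:rsgroup descriptor
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
      }
    }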
2023-07-17 22:15:29,056 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,056 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:29,056 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,056 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,059 INFO [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=72 2023-07-17 22:15:29,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=72, state=SUCCESS; OpenRegionProcedure fdcdbf251438e26cb4d3816e7324408a, server=jenkins-hbase4.apache.org,42021,1689632117931 in 184 msec 2023-07-17 22:15:29,060 DEBUG [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m 2023-07-17 22:15:29,060 DEBUG [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m 2023-07-17 22:15:29,060 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fdcdbf251438e26cb4d3816e7324408a, REOPEN/MOVE in 1.2700 sec 2023-07-17 22:15:29,060 INFO [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 50dfbd4291683110d06a43487ab94cb0 columnFamilyName m 2023-07-17 22:15:29,068 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0217a03bcc814095a775bf2d28f9290f 2023-07-17 22:15:29,068 DEBUG [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] regionserver.HStore(539): loaded 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m/0217a03bcc814095a775bf2d28f9290f 2023-07-17 22:15:29,069 INFO [StoreOpener-50dfbd4291683110d06a43487ab94cb0-1] regionserver.HStore(310): Store=50dfbd4291683110d06a43487ab94cb0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:29,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,071 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,074 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:29,075 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 50dfbd4291683110d06a43487ab94cb0; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@172531e, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:29,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 50dfbd4291683110d06a43487ab94cb0: 2023-07-17 22:15:29,076 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0., pid=80, masterSystemTime=1689632129027 2023-07-17 22:15:29,078 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:29,078 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
2023-07-17 22:15:29,078 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=50dfbd4291683110d06a43487ab94cb0, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:29,078 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632129078"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632129078"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632129078"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632129078"}]},"ts":"1689632129078"} 2023-07-17 22:15:29,082 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=73 2023-07-17 22:15:29,082 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=73, state=SUCCESS; OpenRegionProcedure 50dfbd4291683110d06a43487ab94cb0, server=jenkins-hbase4.apache.org,42021,1689632117931 in 204 msec 2023-07-17 22:15:29,083 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=50dfbd4291683110d06a43487ab94cb0, REOPEN/MOVE in 1.2900 sec 2023-07-17 22:15:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825, jenkins-hbase4.apache.org,41625,1689632118141] are moved back to default 2023-07-17 22:15:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-17 22:15:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:29,798 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41625] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:57018 deadline: 1689632189798, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42021 startCode=1689632117931. As of locationSeqNum=26. 2023-07-17 22:15:29,902 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41625] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:57018 deadline: 1689632189902, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42021 startCode=1689632117931. As of locationSeqNum=92. 
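With the system regions drained back to a default-group server, the MoveServers call completes ("Move servers done: default => bar") and the three servers now belong to "bar". The result can be read back through the same rsgroup client, which is what the GetRSGroupInfo request in the next entries does; a sketch, again assuming the branch-2.4 RSGroupAdminClient:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class VerifyBarGroup {  // sketch only
      static void verify(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // RSGroupAdminService.GetRSGroupInfo for group "bar"
        RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
        for (Address server : bar.getServers()) {
          System.out.println("bar member: " + server);
        }
      }
    }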
2023-07-17 22:15:30,004 DEBUG [hconnection-0x63551a-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:30,012 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40034, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:30,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:30,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:30,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-17 22:15:30,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:30,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:30,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:30,043 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:30,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-17 22:15:30,044 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41625] ipc.CallRunner(144): callId: 187 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:57014 deadline: 1689632190044, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42021 startCode=1689632117931. As of locationSeqNum=26. 
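The create request logged above spells out the schema of Group_testFailRemoveGroup: one region replica and a single family 'f' with one version, no bloom filter and a 64 KB block size. A client-side sketch that builds an equivalent descriptor with the 2.x builder API, setting only the attributes visible in the log and leaving the rest at their defaults:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestTable {  // sketch only
      static void create(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setRegionReplication(1)                                    // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)                                      // VERSIONS => '1'
                  .setBloomFilterType(BloomType.NONE)                     // BLOOMFILTER => 'NONE'
                  .setBlocksize(65536)                                    // BLOCKSIZE => '65536'
                  .build())
              .build());                      // master stores this as the CreateTableProcedure (pid=81)
        }
      }
    }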
2023-07-17 22:15:30,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-17 22:15:30,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-17 22:15:30,149 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:30,150 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 22:15:30,150 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:30,151 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:30,153 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:30,155 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,155 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f empty. 2023-07-17 22:15:30,156 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,156 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-17 22:15:30,172 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:30,174 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7bef67d0d9c7a79bb95720f34e180b3f, NAME => 'Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:30,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:30,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 7bef67d0d9c7a79bb95720f34e180b3f, disabling compactions & flushes 2023-07-17 22:15:30,186 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. after waiting 0 ms 2023-07-17 22:15:30,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,186 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:30,189 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:30,190 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632130190"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632130190"}]},"ts":"1689632130190"} 2023-07-17 22:15:30,192 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 22:15:30,192 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:30,192 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632130192"}]},"ts":"1689632130192"} 2023-07-17 22:15:30,194 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-17 22:15:30,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, ASSIGN}] 2023-07-17 22:15:30,203 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, ASSIGN 2023-07-17 22:15:30,204 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:30,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-17 22:15:30,355 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:30,356 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632130355"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632130355"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632130355"}]},"ts":"1689632130355"} 2023-07-17 22:15:30,357 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:30,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
2023-07-17 22:15:30,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7bef67d0d9c7a79bb95720f34e180b3f, NAME => 'Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:30,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:30,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,516 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,518 DEBUG [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f 2023-07-17 22:15:30,518 DEBUG [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f 2023-07-17 22:15:30,518 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7bef67d0d9c7a79bb95720f34e180b3f columnFamilyName f 2023-07-17 22:15:30,519 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] regionserver.HStore(310): Store=7bef67d0d9c7a79bb95720f34e180b3f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:30,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,521 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:30,528 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7bef67d0d9c7a79bb95720f34e180b3f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10786930880, jitterRate=0.004611223936080933}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:30,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:30,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f., pid=83, masterSystemTime=1689632130509 2023-07-17 22:15:30,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
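At this point the table's single region (7bef67d0d9c7a79bb95720f34e180b3f) is open on jenkins-hbase4.apache.org,42021. A client can confirm the placement with the standard RegionLocator API; the sketch below is a static helper to drop into any class (helper name assumed, and it expects an already-open Connection).

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Prints where each region of the table is currently hosted.
    static void printRegionPlacement(Connection conn) throws IOException {
      try (RegionLocator locator =
          conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }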
2023-07-17 22:15:30,532 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:30,532 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632130532"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632130532"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632130532"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632130532"}]},"ts":"1689632130532"} 2023-07-17 22:15:30,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-17 22:15:30,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931 in 177 msec 2023-07-17 22:15:30,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-17 22:15:30,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, ASSIGN in 336 msec 2023-07-17 22:15:30,541 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:30,541 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632130541"}]},"ts":"1689632130541"} 2023-07-17 22:15:30,543 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-17 22:15:30,546 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:30,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 506 msec 2023-07-17 22:15:30,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-17 22:15:30,650 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-17 22:15:30,650 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-17 22:15:30,650 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:30,651 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41625] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:57016 deadline: 1689632190651, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42021 startCode=1689632117931. As of locationSeqNum=92. 2023-07-17 22:15:30,755 DEBUG [hconnection-0x5b2f1cbe-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:30,756 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57576, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:30,763 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-17 22:15:30,764 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:30,764 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-17 22:15:30,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-17 22:15:30,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:30,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 22:15:30,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:30,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:30,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-17 22:15:30,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 7bef67d0d9c7a79bb95720f34e180b3f to RSGroup bar 2023-07-17 22:15:30,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:30,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:30,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:30,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:30,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-17 22:15:30,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:30,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE 2023-07-17 22:15:30,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-17 22:15:30,774 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE 2023-07-17 22:15:30,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:30,775 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632130775"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632130775"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632130775"}]},"ts":"1689632130775"} 2023-07-17 22:15:30,777 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:30,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7bef67d0d9c7a79bb95720f34e180b3f, disabling compactions & flushes 2023-07-17 22:15:30,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. after waiting 0 ms 2023-07-17 22:15:30,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:30,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:30,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
2023-07-17 22:15:30,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:30,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7bef67d0d9c7a79bb95720f34e180b3f move to jenkins-hbase4.apache.org,41625,1689632118141 record at close sequenceid=2 2023-07-17 22:15:30,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:30,942 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=CLOSED 2023-07-17 22:15:30,942 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632130942"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632130942"}]},"ts":"1689632130942"} 2023-07-17 22:15:30,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-17 22:15:30,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931 in 167 msec 2023-07-17 22:15:30,946 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:31,097 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 22:15:31,097 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:31,097 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632131097"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632131097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632131097"}]},"ts":"1689632131097"} 2023-07-17 22:15:31,099 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:31,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
2023-07-17 22:15:31,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7bef67d0d9c7a79bb95720f34e180b3f, NAME => 'Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:31,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:31,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,265 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,266 DEBUG [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f 2023-07-17 22:15:31,266 DEBUG [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f 2023-07-17 22:15:31,267 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7bef67d0d9c7a79bb95720f34e180b3f columnFamilyName f 2023-07-17 22:15:31,267 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] regionserver.HStore(310): Store=7bef67d0d9c7a79bb95720f34e180b3f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:31,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,270 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7bef67d0d9c7a79bb95720f34e180b3f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9899712160, jitterRate=-0.078017458319664}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:31,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:31,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f., pid=86, masterSystemTime=1689632131251 2023-07-17 22:15:31,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:31,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:31,293 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:31,293 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632131293"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632131293"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632131293"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632131293"}]},"ts":"1689632131293"} 2023-07-17 22:15:31,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-17 22:15:31,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,41625,1689632118141 in 196 msec 2023-07-17 22:15:31,304 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE in 527 msec 2023-07-17 22:15:31,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-17 22:15:31,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
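The REOPEN/MOVE procedure above (pid=84, close on 42021, reopen on 41625) was triggered by the MoveTables request logged at 22:15:30,766. Through the hbase-rsgroup module's RSGroupAdminClient, the equivalent client call looks roughly like this sketch (helper name and the passed-in Connection are assumptions):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Moves the table into RSGroup 'bar'; the master responds with the
    // REOPEN/MOVE TransitRegionStateProcedure seen above.
    static void moveTableToBar(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }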
2023-07-17 22:15:31,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:31,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:31,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:31,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-17 22:15:31,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:31,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-17 22:15:31,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:31,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:58158 deadline: 1689633331797, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-17 22:15:31,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:41625] to rsgroup default 2023-07-17 22:15:31,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:31,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:58158 deadline: 1689633331800, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-17 22:15:31,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-17 22:15:31,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:31,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 22:15:31,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:31,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:31,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-17 22:15:31,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 7bef67d0d9c7a79bb95720f34e180b3f to RSGroup default 2023-07-17 22:15:31,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE 2023-07-17 22:15:31,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 22:15:31,814 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE 2023-07-17 22:15:31,815 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:31,815 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632131815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632131815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632131815"}]},"ts":"1689632131815"} 2023-07-17 22:15:31,820 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:31,911 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 22:15:31,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7bef67d0d9c7a79bb95720f34e180b3f, disabling compactions & flushes 2023-07-17 22:15:31,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:31,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:31,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. after waiting 0 ms 2023-07-17 22:15:31,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:31,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:31,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
2023-07-17 22:15:31,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:31,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7bef67d0d9c7a79bb95720f34e180b3f move to jenkins-hbase4.apache.org,42021,1689632117931 record at close sequenceid=5 2023-07-17 22:15:31,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:31,989 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=CLOSED 2023-07-17 22:15:31,989 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632131989"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632131989"}]},"ts":"1689632131989"} 2023-07-17 22:15:31,992 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-17 22:15:31,992 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,41625,1689632118141 in 173 msec 2023-07-17 22:15:31,993 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:32,144 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:32,144 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632132144"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632132144"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632132144"}]},"ts":"1689632132144"} 2023-07-17 22:15:32,146 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:32,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
2023-07-17 22:15:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7bef67d0d9c7a79bb95720f34e180b3f, NAME => 'Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,304 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,305 DEBUG [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f 2023-07-17 22:15:32,305 DEBUG [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f 2023-07-17 22:15:32,306 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7bef67d0d9c7a79bb95720f34e180b3f columnFamilyName f 2023-07-17 22:15:32,306 INFO [StoreOpener-7bef67d0d9c7a79bb95720f34e180b3f-1] regionserver.HStore(310): Store=7bef67d0d9c7a79bb95720f34e180b3f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:32,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,308 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:32,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7bef67d0d9c7a79bb95720f34e180b3f; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11949194560, jitterRate=0.112855464220047}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:32,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:32,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f., pid=89, masterSystemTime=1689632132298 2023-07-17 22:15:32,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:32,314 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:32,314 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:32,315 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632132314"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632132314"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632132314"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632132314"}]},"ts":"1689632132314"} 2023-07-17 22:15:32,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-17 22:15:32,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931 in 170 msec 2023-07-17 22:15:32,319 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, REOPEN/MOVE in 508 msec 2023-07-17 22:15:32,796 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-17 22:15:32,797 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-17 22:15:32,798 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-17 22:15:32,798 DEBUG [HBase-Metrics2-1] 
regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-17 22:15:32,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-17 22:15:32,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-17 22:15:32,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:32,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:32,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:32,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-17 22:15:32,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:32,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:58158 deadline: 1689633332827, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
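The entries that follow show the recovery path: the three regionservers are moved back to 'default' before the group is removed. An equivalent client call might look like the sketch below, built on the assumed RSGroupAdminClient and Address APIs (hostnames and ports are the ones from this run):

    import java.util.Set;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Moves the three regionservers of group 'bar' back to 'default'.
    static void drainGroupBar(Connection conn) throws Exception {
      Set<Address> servers = Stream
          .of("jenkins-hbase4.apache.org:34803", "jenkins-hbase4.apache.org:34647",
              "jenkins-hbase4.apache.org:41625")             // ports from this run
          .map(Address::fromString)
          .collect(Collectors.toSet());
      new RSGroupAdminClient(conn).moveServers(servers, "default");
    }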
2023-07-17 22:15:32,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:41625] to rsgroup default 2023-07-17 22:15:32,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:32,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 22:15:32,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:32,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:32,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-17 22:15:32,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825, jenkins-hbase4.apache.org,41625,1689632118141] are moved back to bar 2023-07-17 22:15:32,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-17 22:15:32,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:32,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:32,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:32,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-17 22:15:32,845 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41625] ipc.CallRunner(144): callId: 212 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:57014 deadline: 1689632192845, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42021 startCode=1689632117931. As of locationSeqNum=6. 
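Once 'bar' holds neither tables nor servers, the RemoveRSGroup request that follows succeeds and the ZK GroupInfo count drops from 6 to 5. A minimal sketch of that final step plus a listRSGroups sanity check (helper name assumed):

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Removes the now-empty group and lists what remains.
    static void removeGroupBar(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.removeRSGroup("bar");
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName() + " servers=" + info.getServers().size());
      }
    }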
2023-07-17 22:15:32,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:32,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:32,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:32,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:32,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:32,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:32,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:32,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:32,974 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-17 22:15:32,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-17 22:15:32,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:32,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-17 22:15:32,980 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632132980"}]},"ts":"1689632132980"} 2023-07-17 22:15:32,982 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-17 22:15:32,984 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-17 22:15:32,985 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, UNASSIGN}] 2023-07-17 22:15:32,987 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, UNASSIGN 2023-07-17 22:15:32,988 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:32,988 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632132988"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632132988"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632132988"}]},"ts":"1689632132988"} 2023-07-17 22:15:32,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:33,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-17 22:15:33,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:33,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7bef67d0d9c7a79bb95720f34e180b3f, disabling compactions & flushes 2023-07-17 22:15:33,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:33,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:33,158 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. after waiting 0 ms 2023-07-17 22:15:33,158 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 2023-07-17 22:15:33,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-17 22:15:33,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f. 
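The disable of Group_testFailRemoveGroup above runs as a master-side procedure chain (DisableTableProcedure pid=90 spawning TransitRegionStateProcedure pid=91 and CloseRegionProcedure pid=92) while the client only polls whether the procedure is done. A rough sketch of driving that from a client with the asynchronous Admin API, under the assumption that the caller simply waits on the returned future (a sketch, not the test's own code):

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Hypothetical class name; illustrates the disable path seen in the log.
    public class DisableTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Submits a DisableTableProcedure on the master; the returned future polls
          // the master ("Checking to see if procedure is done pid=...") until it completes.
          Future<Void> disable = admin.disableTableAsync(table);
          disable.get(60, TimeUnit.SECONDS);
          System.out.println("table disabled: " + admin.isTableDisabled(table));
        }
      }
    }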
2023-07-17 22:15:33,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7bef67d0d9c7a79bb95720f34e180b3f: 2023-07-17 22:15:33,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:33,167 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=7bef67d0d9c7a79bb95720f34e180b3f, regionState=CLOSED 2023-07-17 22:15:33,167 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689632133167"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632133167"}]},"ts":"1689632133167"} 2023-07-17 22:15:33,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-17 22:15:33,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 7bef67d0d9c7a79bb95720f34e180b3f, server=jenkins-hbase4.apache.org,42021,1689632117931 in 180 msec 2023-07-17 22:15:33,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-17 22:15:33,180 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7bef67d0d9c7a79bb95720f34e180b3f, UNASSIGN in 192 msec 2023-07-17 22:15:33,180 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632133180"}]},"ts":"1689632133180"} 2023-07-17 22:15:33,182 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-17 22:15:33,186 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-17 22:15:33,188 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 210 msec 2023-07-17 22:15:33,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-17 22:15:33,283 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-17 22:15:33,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-17 22:15:33,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:33,287 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:33,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-17 22:15:33,288 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:33,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:33,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:33,293 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:33,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-17 22:15:33,295 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits] 2023-07-17 22:15:33,301 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits/10.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f/recovered.edits/10.seqid 2023-07-17 22:15:33,302 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testFailRemoveGroup/7bef67d0d9c7a79bb95720f34e180b3f 2023-07-17 22:15:33,302 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-17 22:15:33,305 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:33,308 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-17 22:15:33,338 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-17 22:15:33,339 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:33,340 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-17 22:15:33,340 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632133340"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:33,342 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 22:15:33,342 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7bef67d0d9c7a79bb95720f34e180b3f, NAME => 'Group_testFailRemoveGroup,,1689632130040.7bef67d0d9c7a79bb95720f34e180b3f.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 22:15:33,342 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-17 22:15:33,342 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632133342"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:33,343 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-17 22:15:33,345 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 22:15:33,346 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 61 msec 2023-07-17 22:15:33,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-17 22:15:33,396 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-17 22:15:33,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:33,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
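Once the disable completes, the test deletes the table; the DeleteTableProcedure above archives the region directory under /archive, removes the hbase:meta rows, and drops the descriptor before procId 93 reports completed. A minimal disable-then-delete sequence from the client side, again only a sketch of the standard Admin API rather than the test's helper code:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Hypothetical class name; mirrors the DISABLE then DELETE operations in the log.
    public class DeleteTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);   // DisableTableProcedure (pid=90 in the log)
          }
          admin.deleteTable(table);      // DeleteTableProcedure (pid=93 in the log)
          System.out.println("exists after delete: " + admin.tableExists(table));
        }
      }
    }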
2023-07-17 22:15:33,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:33,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:33,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:33,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:33,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:33,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:33,414 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:33,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:33,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:33,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:33,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:33,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:33,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:33,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633333426, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:33,427 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:33,429 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:33,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,430 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:33,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:33,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:33,454 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=510 (was 494) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2075976686_17 at /127.0.0.1:34184 [Receiving block BP-690370225-172.31.14.131-1689632111991:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-41553565_17 at /127.0.0.1:49800 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b-prefix:jenkins-hbase4.apache.org,42021,1689632117931.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-41553565_17 at /127.0.0.1:34236 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-690370225-172.31.14.131-1689632111991:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-690370225-172.31.14.131-1689632111991:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-690370225-172.31.14.131-1689632111991:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2075976686_17 at /127.0.0.1:49792 [Receiving block BP-690370225-172.31.14.131-1689632111991:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2075976686_17 at /127.0.0.1:40660 [Receiving block BP-690370225-172.31.14.131-1689632111991:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b2f1cbe-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=787 (was 772) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=353 (was 366), ProcessCount=172 (was 172), AvailableMemoryMB=3153 (was 3416) 2023-07-17 22:15:33,455 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-17 22:15:33,476 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=510, OpenFileDescriptor=787, MaxFileDescriptor=60000, SystemLoadAverage=353, ProcessCount=172, AvailableMemoryMB=3152 2023-07-17 22:15:33,477 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-17 22:15:33,477 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-17 22:15:33,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:33,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:33,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:33,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:33,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:33,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:33,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:33,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:33,494 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:33,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:33,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,498 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:33,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:33,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:33,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:33,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:33,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633333507, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:33,508 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:33,512 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:33,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,513 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:33,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:33,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:33,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:33,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:33,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:33,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:33,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:33,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,534 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34647] to rsgroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:33,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:33,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:33,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064] are moved back to default 2023-07-17 22:15:33,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:33,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:33,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:33,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:33,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:33,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:33,554 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:33,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-17 22:15:33,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-17 22:15:33,557 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:33,558 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:33,558 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:33,565 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:33,571 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:33,573 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,574 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec empty. 2023-07-17 22:15:33,574 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,574 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-17 22:15:33,606 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:33,607 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => e253d7051f40d48a93dbea74a001beec, NAME => 'GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:33,624 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:33,624 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
e253d7051f40d48a93dbea74a001beec, disabling compactions & flushes 2023-07-17 22:15:33,624 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:33,624 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:33,624 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. after waiting 0 ms 2023-07-17 22:15:33,624 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:33,624 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:33,624 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for e253d7051f40d48a93dbea74a001beec: 2023-07-17 22:15:33,626 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:33,627 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632133627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632133627"}]},"ts":"1689632133627"} 2023-07-17 22:15:33,629 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
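The CreateTableProcedure steps above (pre-operation, write FS layout, region init/close, add to meta) correspond to an ordinary single-region create of 'GrouptestMultiTableMoveA' with one column family 'f'. The test drives this through its testing utility; purely as an illustrative sketch (not the test's own code), an equivalent client-side create would look roughly like this, assuming an already-open Connection conn:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative only: builds a descriptor matching the attributes logged above
    // (REGION_REPLICATION => '1', family 'f' with VERSIONS => '1') and creates the
    // table with no split keys, i.e. a single region with empty STARTKEY/ENDKEY.
    try (Admin admin = conn.getAdmin()) {
      TableDescriptorBuilder table = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .build());
      admin.createTable(table.build());
    }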
2023-07-17 22:15:33,638 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:33,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632133638"}]},"ts":"1689632133638"} 2023-07-17 22:15:33,640 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-17 22:15:33,643 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:33,644 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:33,644 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:33,644 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:33,644 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:33,644 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, ASSIGN}] 2023-07-17 22:15:33,646 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, ASSIGN 2023-07-17 22:15:33,647 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:33,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-17 22:15:33,797 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
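The assignment above follows the group bootstrap logged a little earlier in this excerpt ('add rsgroup Group_testMultiTableMove_2057972009' and 'move servers [jenkins-hbase4.apache.org:34647] to rsgroup Group_testMultiTableMove_2057972009'). The stack trace earlier shows the client side of such calls going through RSGroupAdminClient; a minimal sketch of that bootstrap, assuming the Connection-based RSGroupAdminClient constructor, an open Connection conn, and treating the group name and server address as placeholders copied from this log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: create the test group and move one region server into it.
    // moveServers is the same client entry point that appears in the
    // RSGroupAdminClient.moveServers frame of the stack trace above.
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("Group_testMultiTableMove_2057972009");
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34647)),
        "Group_testMultiTableMove_2057972009");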
2023-07-17 22:15:33,799 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:33,799 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632133799"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632133799"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632133799"}]},"ts":"1689632133799"} 2023-07-17 22:15:33,801 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:33,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-17 22:15:33,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:33,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e253d7051f40d48a93dbea74a001beec, NAME => 'GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:33,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:33,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,958 INFO [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,960 DEBUG [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/f 2023-07-17 22:15:33,960 DEBUG [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/f 2023-07-17 22:15:33,960 INFO [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e253d7051f40d48a93dbea74a001beec columnFamilyName f 2023-07-17 22:15:33,961 INFO [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] regionserver.HStore(310): Store=e253d7051f40d48a93dbea74a001beec/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:33,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:33,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:33,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e253d7051f40d48a93dbea74a001beec; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11757607840, jitterRate=0.09501256048679352}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:33,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e253d7051f40d48a93dbea74a001beec: 2023-07-17 22:15:33,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec., pid=96, masterSystemTime=1689632133953 2023-07-17 22:15:33,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:33,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 
2023-07-17 22:15:33,971 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:33,971 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632133971"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632133971"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632133971"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632133971"}]},"ts":"1689632133971"} 2023-07-17 22:15:33,976 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-17 22:15:33,976 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,41625,1689632118141 in 172 msec 2023-07-17 22:15:33,978 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-17 22:15:33,978 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, ASSIGN in 332 msec 2023-07-17 22:15:33,979 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:33,979 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632133979"}]},"ts":"1689632133979"} 2023-07-17 22:15:33,980 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-17 22:15:33,983 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:33,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 434 msec 2023-07-17 22:15:34,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-17 22:15:34,160 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-17 22:15:34,160 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-17 22:15:34,160 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:34,164 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
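The 'Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms' lines above are emitted by the testing utility's assignment wait. As a hedged sketch of that call (the test body itself is not part of this log), with TEST_UTIL standing in for the HBaseTestingUtility instance backing the mini cluster:

    import org.apache.hadoop.hbase.TableName;

    // Blocks until every region of the table is assigned and reflected in
    // hbase:meta, or the 60 s timeout logged above expires.
    TEST_UTIL.waitUntilAllRegionsAssigned(
        TableName.valueOf("GrouptestMultiTableMoveA"), 60000);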
2023-07-17 22:15:34,164 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:34,164 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-17 22:15:34,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:34,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:34,169 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:34,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-17 22:15:34,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-17 22:15:34,172 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,172 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:34,173 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:34,173 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:34,176 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:34,177 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,178 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 empty. 
2023-07-17 22:15:34,179 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,179 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-17 22:15:34,195 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:34,202 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6d59b59e81f4a919859005bb927a5777, NAME => 'GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:34,387 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:34,387 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 6d59b59e81f4a919859005bb927a5777, disabling compactions & flushes 2023-07-17 22:15:34,387 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:34,387 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:34,387 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. after waiting 0 ms 2023-07-17 22:15:34,387 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:34,387 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 
2023-07-17 22:15:34,387 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 6d59b59e81f4a919859005bb927a5777: 2023-07-17 22:15:34,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-17 22:15:34,392 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:34,393 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632134393"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632134393"}]},"ts":"1689632134393"} 2023-07-17 22:15:34,395 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:34,396 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:34,397 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632134397"}]},"ts":"1689632134397"} 2023-07-17 22:15:34,398 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-17 22:15:34,403 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:34,403 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:34,403 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:34,403 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:34,403 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:34,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, ASSIGN}] 2023-07-17 22:15:34,407 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, ASSIGN 2023-07-17 22:15:34,408 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:34,558 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
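Once the assign step picks a server ('Reassigned 1 regions. 1 retained the pre-restart assignment.'), the region's location is recorded in hbase:meta. Purely as an illustrative way to inspect where a region of the new table landed, again assuming an open Connection conn:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Lists each region of the table together with the region server
    // currently hosting it, as resolved from hbase:meta.
    try (RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("GrouptestMultiTableMoveB"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }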
2023-07-17 22:15:34,560 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:34,560 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632134560"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632134560"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632134560"}]},"ts":"1689632134560"} 2023-07-17 22:15:34,561 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:34,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-17 22:15:34,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:34,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d59b59e81f4a919859005bb927a5777, NAME => 'GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:34,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:34,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,719 INFO [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,721 DEBUG [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/f 2023-07-17 22:15:34,721 DEBUG [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/f 2023-07-17 22:15:34,721 INFO [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d59b59e81f4a919859005bb927a5777 columnFamilyName f 2023-07-17 22:15:34,722 INFO [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] regionserver.HStore(310): Store=6d59b59e81f4a919859005bb927a5777/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:34,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:34,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:34,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d59b59e81f4a919859005bb927a5777; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11933610560, jitterRate=0.11140409111976624}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:34,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d59b59e81f4a919859005bb927a5777: 2023-07-17 22:15:34,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777., pid=99, masterSystemTime=1689632134713 2023-07-17 22:15:34,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:34,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 
2023-07-17 22:15:34,731 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:34,731 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632134731"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632134731"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632134731"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632134731"}]},"ts":"1689632134731"} 2023-07-17 22:15:34,735 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-17 22:15:34,735 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,41625,1689632118141 in 172 msec 2023-07-17 22:15:34,736 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-17 22:15:34,736 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, ASSIGN in 332 msec 2023-07-17 22:15:34,737 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:34,737 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632134737"}]},"ts":"1689632134737"} 2023-07-17 22:15:34,738 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-17 22:15:34,740 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:34,741 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 574 msec 2023-07-17 22:15:34,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-17 22:15:34,890 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-17 22:15:34,891 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-17 22:15:34,891 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:34,897 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-17 22:15:34,897 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:34,897 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-17 22:15:34,898 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:34,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-17 22:15:34,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:34,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-17 22:15:34,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:34,913 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:34,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:34,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:34,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 6d59b59e81f4a919859005bb927a5777 to RSGroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, REOPEN/MOVE 2023-07-17 22:15:34,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,926 INFO [PEWorker-1] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, REOPEN/MOVE 2023-07-17 22:15:34,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region e253d7051f40d48a93dbea74a001beec to RSGroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:34,926 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:34,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, REOPEN/MOVE 2023-07-17 22:15:34,927 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632134926"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632134926"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632134926"}]},"ts":"1689632134926"} 2023-07-17 22:15:34,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_2057972009, current retry=0 2023-07-17 22:15:34,928 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, REOPEN/MOVE 2023-07-17 22:15:34,929 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:34,929 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632134929"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632134929"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632134929"}]},"ts":"1689632134929"} 2023-07-17 22:15:34,929 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:34,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:35,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d59b59e81f4a919859005bb927a5777, disabling compactions & flushes 2023-07-17 22:15:35,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 
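
The "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2057972009" request above, and the REOPEN/MOVE procedures it spawns, correspond to a single client call. A hedged sketch follows, assuming the hbase-rsgroup module's RSGroupAdminClient (the private client this test module exercises); group and table names are copied from the log, connection setup is generic.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
      tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
      // Each region of the moved tables is closed on its current server and
      // reopened on a server in the target group (the Close/OpenRegionProcedure
      // pairs logged around this point).
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_2057972009");
    }
  }
}
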
2023-07-17 22:15:35,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:35,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. after waiting 0 ms 2023-07-17 22:15:35,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:35,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:35,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:35,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d59b59e81f4a919859005bb927a5777: 2023-07-17 22:15:35,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6d59b59e81f4a919859005bb927a5777 move to jenkins-hbase4.apache.org,34647,1689632118064 record at close sequenceid=2 2023-07-17 22:15:35,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e253d7051f40d48a93dbea74a001beec, disabling compactions & flushes 2023-07-17 22:15:35,092 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:35,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:35,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. after waiting 0 ms 2023-07-17 22:15:35,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 
2023-07-17 22:15:35,092 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=CLOSED 2023-07-17 22:15:35,093 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632135092"}]},"ts":"1689632135092"} 2023-07-17 22:15:35,097 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-17 22:15:35,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:35,097 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,41625,1689632118141 in 166 msec 2023-07-17 22:15:35,098 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34647,1689632118064; forceNewPlan=false, retain=false 2023-07-17 22:15:35,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 
2023-07-17 22:15:35,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e253d7051f40d48a93dbea74a001beec: 2023-07-17 22:15:35,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e253d7051f40d48a93dbea74a001beec move to jenkins-hbase4.apache.org,34647,1689632118064 record at close sequenceid=2 2023-07-17 22:15:35,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,101 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=CLOSED 2023-07-17 22:15:35,101 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135101"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632135101"}]},"ts":"1689632135101"} 2023-07-17 22:15:35,103 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-17 22:15:35,103 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,41625,1689632118141 in 171 msec 2023-07-17 22:15:35,104 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34647,1689632118064; forceNewPlan=false, retain=false 2023-07-17 22:15:35,248 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:35,248 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:35,248 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135248"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632135248"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632135248"}]},"ts":"1689632135248"} 2023-07-17 22:15:35,248 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135248"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632135248"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632135248"}]},"ts":"1689632135248"} 2023-07-17 22:15:35,250 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:35,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, 
state=RUNNABLE; OpenRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:35,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:35,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e253d7051f40d48a93dbea74a001beec, NAME => 'GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:35,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:35,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,410 INFO [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,411 DEBUG [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/f 2023-07-17 22:15:35,411 DEBUG [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/f 2023-07-17 22:15:35,412 INFO [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e253d7051f40d48a93dbea74a001beec columnFamilyName f 2023-07-17 22:15:35,412 INFO [StoreOpener-e253d7051f40d48a93dbea74a001beec-1] regionserver.HStore(310): Store=e253d7051f40d48a93dbea74a001beec/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:35,413 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:35,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e253d7051f40d48a93dbea74a001beec; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10311251040, jitterRate=-0.039689913392066956}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:35,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e253d7051f40d48a93dbea74a001beec: 2023-07-17 22:15:35,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec., pid=105, masterSystemTime=1689632135402 2023-07-17 22:15:35,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:35,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:35,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 
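
Once the post-open deploy task above completes, the region is live on the new server (jenkins-hbase4.apache.org,34647). A hedged sketch of how a test could confirm the post-move location with a non-cached lookup; the helper class is hypothetical and connection setup is elided.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public final class RegionLocationCheck {
  static void printLocation(Connection conn) throws Exception {
    TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
    try (RegionLocator locator = conn.getRegionLocator(tableA)) {
      // reload=true bypasses the client-side cache so the post-move server is returned.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println("Region " + loc.getRegion().getEncodedName()
          + " is on " + loc.getServerName());
    }
  }
}
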
2023-07-17 22:15:35,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d59b59e81f4a919859005bb927a5777, NAME => 'GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:35,422 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:35,422 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135422"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632135422"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632135422"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632135422"}]},"ts":"1689632135422"} 2023-07-17 22:15:35,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:35,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,424 INFO [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,425 DEBUG [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/f 2023-07-17 22:15:35,425 DEBUG [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/f 2023-07-17 22:15:35,426 INFO [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d59b59e81f4a919859005bb927a5777 columnFamilyName f 2023-07-17 22:15:35,426 INFO [StoreOpener-6d59b59e81f4a919859005bb927a5777-1] regionserver.HStore(310): Store=6d59b59e81f4a919859005bb927a5777/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:35,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-17 22:15:35,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,34647,1689632118064 in 173 msec 2023-07-17 22:15:35,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,429 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, REOPEN/MOVE in 501 msec 2023-07-17 22:15:35,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:35,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d59b59e81f4a919859005bb927a5777; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10457117600, jitterRate=-0.026105031371116638}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:35,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d59b59e81f4a919859005bb927a5777: 2023-07-17 22:15:35,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777., pid=104, masterSystemTime=1689632135402 2023-07-17 22:15:35,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:35,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 
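
With both REOPEN/MOVE procedures finished, the target group should own both tables; the GetRSGroupInfoOfTable RPCs logged just below are the wire form of that check. An illustrative sketch, assuming the same RSGroupAdminClient as earlier:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class VerifyGroupMembership {
  static void verify(RSGroupAdminClient rsGroupAdmin) throws Exception {
    String group = "Group_testMultiTableMove_2057972009";
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    boolean hasA = info.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveA"));
    boolean hasB = info.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveB"));
    // Resolving the group from the table side should give the same answer.
    RSGroupInfo ofA = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
    System.out.println("A in group: " + hasA + ", B in group: " + hasB
        + ", group of A: " + ofA.getName());
  }
}
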
2023-07-17 22:15:35,436 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:35,436 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135436"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632135436"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632135436"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632135436"}]},"ts":"1689632135436"} 2023-07-17 22:15:35,439 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-17 22:15:35,439 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,34647,1689632118064 in 188 msec 2023-07-17 22:15:35,441 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, REOPEN/MOVE in 515 msec 2023-07-17 22:15:35,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-17 22:15:35,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_2057972009. 2023-07-17 22:15:35,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:35,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:35,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:35,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-17 22:15:35,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:35,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-17 22:15:35,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:35,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:35,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:35,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2057972009 2023-07-17 22:15:35,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:35,948 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-17 22:15:35,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-17 22:15:35,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:35,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-17 22:15:35,954 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632135953"}]},"ts":"1689632135953"} 2023-07-17 22:15:35,955 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-17 22:15:35,957 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-17 22:15:35,961 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, UNASSIGN}] 2023-07-17 22:15:35,962 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, UNASSIGN 2023-07-17 22:15:35,963 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:35,964 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632135963"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632135963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632135963"}]},"ts":"1689632135963"} 2023-07-17 22:15:35,965 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure e253d7051f40d48a93dbea74a001beec, 
server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:36,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-17 22:15:36,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:36,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e253d7051f40d48a93dbea74a001beec, disabling compactions & flushes 2023-07-17 22:15:36,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:36,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:36,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. after waiting 0 ms 2023-07-17 22:15:36,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 2023-07-17 22:15:36,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:36,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec. 
2023-07-17 22:15:36,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e253d7051f40d48a93dbea74a001beec: 2023-07-17 22:15:36,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:36,125 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=e253d7051f40d48a93dbea74a001beec, regionState=CLOSED 2023-07-17 22:15:36,125 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632136125"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632136125"}]},"ts":"1689632136125"} 2023-07-17 22:15:36,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-17 22:15:36,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure e253d7051f40d48a93dbea74a001beec, server=jenkins-hbase4.apache.org,34647,1689632118064 in 161 msec 2023-07-17 22:15:36,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-17 22:15:36,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e253d7051f40d48a93dbea74a001beec, UNASSIGN in 171 msec 2023-07-17 22:15:36,130 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632136130"}]},"ts":"1689632136130"} 2023-07-17 22:15:36,131 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-17 22:15:36,132 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-17 22:15:36,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 184 msec 2023-07-17 22:15:36,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-17 22:15:36,256 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-17 22:15:36,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-17 22:15:36,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:36,260 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:36,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_2057972009' 2023-07-17 22:15:36,262 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:36,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:36,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,267 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:36,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:36,269 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/recovered.edits] 2023-07-17 22:15:36,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-17 22:15:36,274 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec/recovered.edits/7.seqid 2023-07-17 22:15:36,275 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveA/e253d7051f40d48a93dbea74a001beec 2023-07-17 22:15:36,275 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-17 22:15:36,277 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:36,280 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-17 22:15:36,281 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-17 22:15:36,282 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:36,282 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
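
The DisableTableProcedure/DeleteTableProcedure pair for GrouptestMultiTableMoveA (pids 106 and 109) is the teardown a client triggers with two standard Admin calls. A minimal sketch, with the Admin instance assumed to come from the test's shared connection:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableSketch {
  static void dropIfPresent(Admin admin, String name) throws Exception {
    TableName table = TableName.valueOf(name);
    if (admin.tableExists(table)) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);  // DisableTableProcedure: regions are UNASSIGNed
      }
      admin.deleteTable(table);     // DeleteTableProcedure: files archived, meta rows removed
    }
  }
}
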
2023-07-17 22:15:36,282 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632136282"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:36,284 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 22:15:36,284 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e253d7051f40d48a93dbea74a001beec, NAME => 'GrouptestMultiTableMoveA,,1689632133549.e253d7051f40d48a93dbea74a001beec.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 22:15:36,284 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-17 22:15:36,284 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632136284"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:36,285 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-17 22:15:36,288 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 22:15:36,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 31 msec 2023-07-17 22:15:36,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-17 22:15:36,371 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-17 22:15:36,372 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-17 22:15:36,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-17 22:15:36,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-17 22:15:36,376 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632136376"}]},"ts":"1689632136376"} 2023-07-17 22:15:36,377 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-17 22:15:36,379 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-17 22:15:36,379 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, UNASSIGN}] 2023-07-17 22:15:36,381 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, UNASSIGN 2023-07-17 22:15:36,382 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:36,382 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632136382"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632136382"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632136382"}]},"ts":"1689632136382"} 2023-07-17 22:15:36,383 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,34647,1689632118064}] 2023-07-17 22:15:36,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-17 22:15:36,535 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:36,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d59b59e81f4a919859005bb927a5777, disabling compactions & flushes 2023-07-17 22:15:36,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:36,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:36,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. after waiting 0 ms 2023-07-17 22:15:36,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 2023-07-17 22:15:36,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:36,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777. 
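
The repeated "Checking to see if procedure is done pid=110" lines around this point are the client polling the master for procedure completion. A hedged sketch of the asynchronous form of the same disable call, which exposes that wait as a Future; the timeout value is an illustrative assumption.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class AsyncDisableSketch {
  static void disableAndWait(Admin admin) throws Exception {
    Future<Void> f = admin.disableTableAsync(TableName.valueOf("GrouptestMultiTableMoveB"));
    // Completes once the DisableTableProcedure reaches SUCCESS on the master.
    f.get(60, TimeUnit.SECONDS);
  }
}
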
2023-07-17 22:15:36,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d59b59e81f4a919859005bb927a5777: 2023-07-17 22:15:36,542 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:36,543 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=6d59b59e81f4a919859005bb927a5777, regionState=CLOSED 2023-07-17 22:15:36,543 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689632136543"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632136543"}]},"ts":"1689632136543"} 2023-07-17 22:15:36,545 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-17 22:15:36,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 6d59b59e81f4a919859005bb927a5777, server=jenkins-hbase4.apache.org,34647,1689632118064 in 161 msec 2023-07-17 22:15:36,547 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-17 22:15:36,547 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6d59b59e81f4a919859005bb927a5777, UNASSIGN in 166 msec 2023-07-17 22:15:36,547 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632136547"}]},"ts":"1689632136547"} 2023-07-17 22:15:36,549 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-17 22:15:36,550 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-17 22:15:36,552 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 179 msec 2023-07-17 22:15:36,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-17 22:15:36,680 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-17 22:15:36,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-17 22:15:36,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,683 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_2057972009' 2023-07-17 22:15:36,684 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:36,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:36,689 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:36,691 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/recovered.edits] 2023-07-17 22:15:36,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-17 22:15:36,699 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/recovered.edits/7.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777/recovered.edits/7.seqid 2023-07-17 22:15:36,700 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/GrouptestMultiTableMoveB/6d59b59e81f4a919859005bb927a5777 2023-07-17 22:15:36,700 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-17 22:15:36,703 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,705 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-17 22:15:36,707 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-17 22:15:36,708 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,708 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-17 22:15:36,708 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632136708"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:36,710 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 22:15:36,710 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6d59b59e81f4a919859005bb927a5777, NAME => 'GrouptestMultiTableMoveB,,1689632134165.6d59b59e81f4a919859005bb927a5777.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 22:15:36,710 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-17 22:15:36,710 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632136710"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:36,711 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-17 22:15:36,714 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 22:15:36,716 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 33 msec 2023-07-17 22:15:36,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-17 22:15:36,796 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-17 22:15:36,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:36,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:36,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:36,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34647] to rsgroup default 2023-07-17 22:15:36,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2057972009 2023-07-17 22:15:36,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:36,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_2057972009, current retry=0 2023-07-17 22:15:36,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064] are moved back to Group_testMultiTableMove_2057972009 2023-07-17 22:15:36,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_2057972009 => default 2023-07-17 22:15:36,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:36,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_2057972009 2023-07-17 22:15:36,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:36,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:36,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:36,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
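
The cleanup above (servers moved back to the default group, then the per-test group removed) can be sketched as follows, again assuming RSGroupAdminClient; a group must be empty of tables and servers before removeRSGroup succeeds.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class GroupCleanupSketch {
  static void cleanup(RSGroupAdminClient rsGroupAdmin, String group) throws Exception {
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info != null && !info.getServers().isEmpty()) {
      Set<Address> servers = new HashSet<>(info.getServers());
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);  // back to "default"
    }
    rsGroupAdmin.removeRSGroup(group);  // only valid once the group is empty
  }
}
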
2023-07-17 22:15:36,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:36,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:36,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:36,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:36,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:36,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:36,852 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:36,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:36,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:36,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:36,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:36,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:36,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633336867, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:36,869 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:36,871 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:36,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,872 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:36,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:36,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:36,892 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509 (was 510), OpenFileDescriptor=787 (was 787), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=349 (was 353), ProcessCount=172 (was 172), AvailableMemoryMB=2973 (was 3152) 2023-07-17 22:15:36,892 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-17 22:15:36,910 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=509, OpenFileDescriptor=787, MaxFileDescriptor=60000, SystemLoadAverage=349, ProcessCount=172, AvailableMemoryMB=2973 2023-07-17 22:15:36,910 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-17 22:15:36,910 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-17 22:15:36,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:36,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:36,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:36,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:36,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:36,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:36,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:36,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:36,928 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:36,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:36,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:36,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:36,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:36,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:36,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633336941, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:36,941 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:36,943 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:36,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,944 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:36,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:36,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:36,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:36,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:36,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-17 22:15:36,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:36,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:36,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:36,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup oldGroup 2023-07-17 22:15:36,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:36,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:36,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:36,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to default 2023-07-17 22:15:36,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-17 22:15:36,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:36,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-17 22:15:36,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:36,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-17 22:15:36,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:36,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:36,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:36,979 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-17 22:15:36,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-17 22:15:36,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:36,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:36,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:36,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:36,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:36,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41625] to rsgroup anotherRSGroup 2023-07-17 22:15:36,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:36,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-17 22:15:36,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:36,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:36,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:36,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:36,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41625,1689632118141] are moved back to default 2023-07-17 22:15:36,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-17 22:15:36,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:37,001 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-17 22:15:37,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:37,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-17 22:15:37,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:37,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-17 22:15:37,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:37,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:58158 deadline: 1689633337010, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-17 22:15:37,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-17 22:15:37,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:37,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:58158 deadline: 1689633337014, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-17 22:15:37,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-17 22:15:37,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:37,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:58158 deadline: 1689633337015, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-17 22:15:37,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-17 22:15:37,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:37,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:58158 deadline: 1689633337016, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-17 22:15:37,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:37,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
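The three ConstraintExceptions above are the rename guard rails this test exercises: the source group must exist, the target name must not collide with an existing group, and the built-in default group cannot be renamed. A minimal sketch of driving those checks from a client, assuming RSGroupAdminClient exposes renameRSGroup on branch-2.4 (the RSGroupAdminEndpoint.renameRSGroup path in these traces suggests it does); the group names are taken from the log, while the class name, connection setup, and exception handling are illustrative:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameConstraintChecks {
      // Each rename below is expected to be rejected server-side with a
      // ConstraintException, matching the exceptions logged above.
      static void expectRejected(RSGroupAdminClient admin, String from, String to) {
        try {
          admin.renameRSGroup(from, to);
          throw new AssertionError("rename " + from + " -> " + to + " should have failed");
        } catch (IOException e) {
          // The remote ConstraintException surfaces as an IOException on the client;
          // its message still names the violated constraint.
          System.out.println("rejected as expected: " + e.getMessage());
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);
          expectRejected(admin, "nonExistingRSGroup", "newRSGroup1"); // source group does not exist
          expectRejected(admin, "oldGroup", "anotherRSGroup");        // target name already taken
          expectRejected(admin, "default", "newRSGroup2");            // the default group cannot be renamed
        }
      }
    }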
2023-07-17 22:15:37,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:37,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41625] to rsgroup default 2023-07-17 22:15:37,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-17 22:15:37,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:37,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:37,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-17 22:15:37,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41625,1689632118141] are moved back to anotherRSGroup 2023-07-17 22:15:37,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-17 22:15:37,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:37,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-17 22:15:37,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:37,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-17 22:15:37,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:37,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:37,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-17 22:15:37,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:37,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup default 2023-07-17 22:15:37,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 22:15:37,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:37,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-17 22:15:37,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to oldGroup 2023-07-17 22:15:37,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-17 22:15:37,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:37,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-17 22:15:37,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:37,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:37,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:37,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:37,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:37,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:37,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:37,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:37,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:37,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:37,062 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:37,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:37,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:37,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:37,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:37,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:37,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633337073, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:37,074 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:37,075 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:37,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,076 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:37,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:37,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:37,094 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 509) Potentially hanging thread: hconnection-0x63551a-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=787 (was 787), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=349 (was 349), ProcessCount=172 (was 172), AvailableMemoryMB=2973 (was 2973) 2023-07-17 22:15:37,094 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-17 22:15:37,111 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=512, OpenFileDescriptor=787, MaxFileDescriptor=60000, SystemLoadAverage=349, ProcessCount=172, AvailableMemoryMB=2973 2023-07-17 22:15:37,111 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-17 22:15:37,111 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-17 22:15:37,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:37,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
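[Editor's note] The ConstraintException traces above come from the test trying to move the master's RPC address (port 43315) into the "master" rsgroup; the rsgroup manager only accepts addresses it knows as live region servers, so the call is rejected and TestRSGroupsBase merely logs it as "Got this on setup, FYI". A minimal sketch of that failure mode, with placeholder host/port values and the assumption that the exception is unwrapped to ConstraintException on the caller as the WARN above suggests:

    // Minimal sketch: asking the rsgroup endpoint to move an address that is not a live
    // region server raises ConstraintException on the caller. Host/port are placeholders.
    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveOfflineServerSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          try {
            // The master's RPC address is not a region server, so this is rejected.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("master-host.example.org", 16000)),
                "master");
          } catch (ConstraintException e) {
            // Same shape as the log: "Server ... is either offline or it does not exist."
            System.out.println("Move rejected: " + e.getMessage());
          }
        }
      }
    }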
2023-07-17 22:15:37,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:37,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:37,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:37,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:37,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:37,125 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:37,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:37,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:37,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:37,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:37,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:37,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633337139, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:37,139 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:37,142 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:37,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,144 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:37,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:37,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:37,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:37,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:37,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-17 22:15:37,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:37,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:37,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:37,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup oldgroup 2023-07-17 22:15:37,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:37,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:37,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:37,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to default 2023-07-17 22:15:37,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-17 22:15:37,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:37,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:37,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:37,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-17 22:15:37,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:37,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:37,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-17 22:15:37,188 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:37,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-17 22:15:37,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 22:15:37,190 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:37,190 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,191 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,192 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:37,195 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:37,197 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,197 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 empty. 
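[Editor's note] The setup just before the table creation (AddRSGroup "oldgroup", then MoveServers of two region servers into it) is the usual prelude for the rename test. A sketch under the same branch-2.4 API assumptions, with placeholder hostnames and ports:

    // Sketch of the group setup logged above: create "oldgroup" and move two region servers
    // into it. Hostnames/ports are placeholders, not values from this run.
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AddGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("oldgroup");
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("rs1.example.org", 16020)); // placeholder region servers
          servers.add(Address.fromParts("rs2.example.org", 16020));
          // Any regions that must stay in the source group are drained off these servers first;
          // the log shows "Moving 0 region(s)" here because the test table is created afterwards.
          rsGroupAdmin.moveServers(servers, "oldgroup");
        }
      }
    }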
2023-07-17 22:15:37,198 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,198 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-17 22:15:37,228 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:37,229 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2b34f0020745232a8a57d9007f0d3248, NAME => 'testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:37,254 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:37,254 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 2b34f0020745232a8a57d9007f0d3248, disabling compactions & flushes 2023-07-17 22:15:37,254 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,254 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,254 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. after waiting 0 ms 2023-07-17 22:15:37,254 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,254 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,254 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:37,257 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:37,259 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632137259"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632137259"}]},"ts":"1689632137259"} 2023-07-17 22:15:37,261 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
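[Editor's note] The CreateTableProcedure above was triggered by a plain client-side create of 'testRename' with a single 'tr' family; the attributes printed in the log are the branch-2.4 defaults. A sketch of the equivalent client call (standard Admin API, not the test's exact code):

    // Sketch of the client call behind the CreateTableProcedure logged above.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))  // NAME => 'tr', defaults otherwise
              .build();
          // Synchronous create: the client polls until the master procedure finishes,
          // matching the repeated "Checking to see if procedure is done pid=114" lines.
          admin.createTable(desc);
        }
      }
    }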
2023-07-17 22:15:37,261 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:37,262 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632137262"}]},"ts":"1689632137262"} 2023-07-17 22:15:37,263 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-17 22:15:37,269 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:37,269 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:37,269 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:37,270 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:37,270 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, ASSIGN}] 2023-07-17 22:15:37,272 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, ASSIGN 2023-07-17 22:15:37,273 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:37,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 22:15:37,424 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 22:15:37,425 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:37,425 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632137425"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632137425"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632137425"}]},"ts":"1689632137425"} 2023-07-17 22:15:37,427 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:37,487 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 22:15:37,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 22:15:37,582 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b34f0020745232a8a57d9007f0d3248, NAME => 'testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:37,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:37,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,584 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,585 DEBUG [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/tr 2023-07-17 22:15:37,585 DEBUG [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/tr 2023-07-17 22:15:37,585 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b34f0020745232a8a57d9007f0d3248 columnFamilyName tr 2023-07-17 22:15:37,586 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] regionserver.HStore(310): Store=2b34f0020745232a8a57d9007f0d3248/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:37,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:37,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b34f0020745232a8a57d9007f0d3248; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9416112000, jitterRate=-0.12305623292922974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:37,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:37,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248., pid=116, masterSystemTime=1689632137578 2023-07-17 22:15:37,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 
2023-07-17 22:15:37,595 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:37,595 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632137595"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632137595"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632137595"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632137595"}]},"ts":"1689632137595"} 2023-07-17 22:15:37,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-17 22:15:37,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,42021,1689632117931 in 169 msec 2023-07-17 22:15:37,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-17 22:15:37,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, ASSIGN in 328 msec 2023-07-17 22:15:37,600 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:37,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632137600"}]},"ts":"1689632137600"} 2023-07-17 22:15:37,601 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-17 22:15:37,603 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:37,604 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 418 msec 2023-07-17 22:15:37,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 22:15:37,793 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-17 22:15:37,793 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-17 22:15:37,793 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:37,796 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-17 22:15:37,797 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:37,797 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
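[Editor's note] Two steps follow in the log: the test waits until the new table's region is assigned, then asks the rsgroup endpoint to move 'testRename' into "oldgroup", which kicks off the REOPEN/MOVE (CloseRegionProcedure then reopen on an "oldgroup" server) recorded below. A sketch of that flow; the availability loop is a crude stand-in for HBaseTestingUtility.waitUntilAllRegionsAssigned, and the rsgroup client API is assumed as in the earlier sketches:

    // Sketch: wait for the table, then move it into "oldgroup" via the rsgroup endpoint.
    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("testRename");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Crude stand-in for HBaseTestingUtility.waitUntilAllRegionsAssigned(table).
          while (!admin.isTableAvailable(table)) {
            Thread.sleep(100);
          }
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Triggers the TransitRegionStateProcedure REOPEN/MOVE seen in the log below:
          // the region closes on its current server and reopens on a server in "oldgroup".
          rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
        }
      }
    }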
2023-07-17 22:15:37,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-17 22:15:37,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:37,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:37,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:37,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:37,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-17 22:15:37,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 2b34f0020745232a8a57d9007f0d3248 to RSGroup oldgroup 2023-07-17 22:15:37,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:37,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:37,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:37,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:37,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:37,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE 2023-07-17 22:15:37,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-17 22:15:37,805 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE 2023-07-17 22:15:37,806 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:37,806 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632137806"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632137806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632137806"}]},"ts":"1689632137806"} 2023-07-17 22:15:37,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:37,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b34f0020745232a8a57d9007f0d3248, disabling compactions & flushes 2023-07-17 22:15:37,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. after waiting 0 ms 2023-07-17 22:15:37,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:37,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:37,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:37,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2b34f0020745232a8a57d9007f0d3248 move to jenkins-hbase4.apache.org,34803,1689632122825 record at close sequenceid=2 2023-07-17 22:15:37,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:37,968 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=CLOSED 2023-07-17 22:15:37,968 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632137968"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632137968"}]},"ts":"1689632137968"} 2023-07-17 22:15:37,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-17 22:15:37,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,42021,1689632117931 in 162 msec 2023-07-17 22:15:37,971 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34803,1689632122825; 
forceNewPlan=false, retain=false 2023-07-17 22:15:38,121 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 22:15:38,122 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:38,122 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632138122"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632138122"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632138122"}]},"ts":"1689632138122"} 2023-07-17 22:15:38,124 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:38,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:38,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b34f0020745232a8a57d9007f0d3248, NAME => 'testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:38,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:38,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,286 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,287 DEBUG [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/tr 2023-07-17 22:15:38,287 DEBUG [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/tr 2023-07-17 22:15:38,288 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b34f0020745232a8a57d9007f0d3248 columnFamilyName tr 2023-07-17 22:15:38,288 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] regionserver.HStore(310): Store=2b34f0020745232a8a57d9007f0d3248/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:38,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:38,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b34f0020745232a8a57d9007f0d3248; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10382890720, jitterRate=-0.033017948269844055}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:38,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:38,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248., pid=119, masterSystemTime=1689632138276 2023-07-17 22:15:38,297 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:38,297 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 
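Between the two region open/close journals above, the RSGroup admin endpoint moves table testRename into group oldgroup: pid=117 (TransitRegionStateProcedure REOPEN/MOVE) closes region 2b34f0020745232a8a57d9007f0d3248 on server 42021 and reopens it on 34803, a member of oldgroup. A hedged sketch of the client call that triggers this sequence is shown below; the connection setup and configuration are assumptions, not taken from the test source.

    // Hedged sketch: moving a table into an existing RSGroup through the
    // hbase-rsgroup client. This is the kind of request that produces the
    // REOPEN/MOVE procedure seen in the log.
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Moves every region of testRename onto servers that belong to "oldgroup".
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        }
      }
    }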
2023-07-17 22:15:38,297 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:38,297 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632138297"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632138297"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632138297"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632138297"}]},"ts":"1689632138297"} 2023-07-17 22:15:38,300 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-17 22:15:38,300 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,34803,1689632122825 in 175 msec 2023-07-17 22:15:38,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE in 496 msec 2023-07-17 22:15:38,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-17 22:15:38,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-17 22:15:38,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:38,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:38,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:38,812 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:38,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-17 22:15:38,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:38,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-17 22:15:38,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:38,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-17 22:15:38,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:38,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:38,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:38,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-17 22:15:38,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:38,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:38,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:38,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:38,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:38,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:38,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:38,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:38,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41625] to rsgroup normal 2023-07-17 22:15:38,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:38,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:38,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:38,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:38,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:38,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:38,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41625,1689632118141] are moved back to default 2023-07-17 22:15:38,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-17 22:15:38,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:38,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:38,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:38,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-17 22:15:38,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:38,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:38,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-17 22:15:38,850 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:38,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-17 22:15:38,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-17 22:15:38,852 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:38,853 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:38,853 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:38,854 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-17 22:15:38,854 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:38,856 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:38,858 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:38,858 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 empty. 2023-07-17 22:15:38,859 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:38,859 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-17 22:15:38,877 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:38,879 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 26133ff4db8e1a874ad4b8256a8d5ff5, NAME => 'unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:38,897 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:38,897 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 26133ff4db8e1a874ad4b8256a8d5ff5, disabling compactions & flushes 2023-07-17 22:15:38,897 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:38,897 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:38,897 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. after waiting 0 ms 2023-07-17 22:15:38,897 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:38,897 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 
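The handler then adds group normal, moves server jenkins-hbase4.apache.org:41625 out of default into it (0 regions need to move), and starts CreateTableProcedure pid=120 for unmovedTable. A hedged sketch of the corresponding group-management calls follows; the host and port are copied from the log, and the surrounding client setup is assumed.

    // Hedged sketch: creating an RSGroup and moving one region server into it,
    // matching the AddRSGroup / MoveServers requests logged above.
    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServerSketch {
      // 'rsGroupAdmin' is an RSGroupAdminClient built from an open Connection (setup omitted).
      static void addGroupAndMoveServer(RSGroupAdminClient rsGroupAdmin) throws IOException {
        rsGroupAdmin.addRSGroup("normal");
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41625)),
            "normal");
      }
    }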
2023-07-17 22:15:38,897 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:38,900 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:38,901 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632138901"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632138901"}]},"ts":"1689632138901"} 2023-07-17 22:15:38,902 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:38,903 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:38,903 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632138903"}]},"ts":"1689632138903"} 2023-07-17 22:15:38,904 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-17 22:15:38,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, ASSIGN}] 2023-07-17 22:15:38,911 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, ASSIGN 2023-07-17 22:15:38,912 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:38,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-17 22:15:39,063 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:39,064 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632139063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632139063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632139063"}]},"ts":"1689632139063"} 2023-07-17 22:15:39,065 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:39,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-17 22:15:39,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 26133ff4db8e1a874ad4b8256a8d5ff5, NAME => 'unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:39,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:39,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,223 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,224 DEBUG [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/ut 2023-07-17 22:15:39,224 DEBUG [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/ut 2023-07-17 22:15:39,225 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 26133ff4db8e1a874ad4b8256a8d5ff5 columnFamilyName ut 2023-07-17 22:15:39,225 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] regionserver.HStore(310): Store=26133ff4db8e1a874ad4b8256a8d5ff5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:39,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:39,231 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 26133ff4db8e1a874ad4b8256a8d5ff5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10687264000, jitterRate=-0.004670977592468262}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:39,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:39,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5., pid=122, masterSystemTime=1689632139217 2023-07-17 22:15:39,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 
2023-07-17 22:15:39,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:39,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632139234"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632139234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632139234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632139234"}]},"ts":"1689632139234"} 2023-07-17 22:15:39,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-17 22:15:39,237 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,42021,1689632117931 in 170 msec 2023-07-17 22:15:39,238 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-17 22:15:39,238 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, ASSIGN in 328 msec 2023-07-17 22:15:39,239 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:39,239 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632139239"}]},"ts":"1689632139239"} 2023-07-17 22:15:39,240 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-17 22:15:39,242 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:39,243 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 395 msec 2023-07-17 22:15:39,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-17 22:15:39,455 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-17 22:15:39,455 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-17 22:15:39,455 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:39,458 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-17 22:15:39,459 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:39,459 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
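unmovedTable is now created, enabled, and assigned on jenkins-hbase4.apache.org,42021. The repeated GetRSGroupInfo, GetRSGroupInfoOfTable, and ListRSGroupInfos requests throughout this section are the test verifying group membership after each step; a hedged sketch of those read-only queries is given below, with the group and table names taken from the log and everything else assumed.

    // Hedged sketch: the read-only RSGroup queries that recur in this log.
    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupQuerySketch {
      static void verify(RSGroupAdminClient rsGroupAdmin) throws IOException {
        RSGroupInfo byTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
        RSGroupInfo byName  = rsGroupAdmin.getRSGroupInfo("oldgroup");
        System.out.println(byTable.getName());     // expected: oldgroup after the move above
        System.out.println(byName.getTables());    // tables currently assigned to the group
        System.out.println(rsGroupAdmin.listRSGroups().size());
      }
    }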
2023-07-17 22:15:39,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-17 22:15:39,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 22:15:39,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:39,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:39,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:39,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:39,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-17 22:15:39,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 26133ff4db8e1a874ad4b8256a8d5ff5 to RSGroup normal 2023-07-17 22:15:39,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE 2023-07-17 22:15:39,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-17 22:15:39,466 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE 2023-07-17 22:15:39,467 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:39,467 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632139467"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632139467"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632139467"}]},"ts":"1689632139467"} 2023-07-17 22:15:39,468 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:39,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 26133ff4db8e1a874ad4b8256a8d5ff5, disabling compactions & flushes 2023-07-17 22:15:39,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 
2023-07-17 22:15:39,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. after waiting 0 ms 2023-07-17 22:15:39,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:39,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:39,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 26133ff4db8e1a874ad4b8256a8d5ff5 move to jenkins-hbase4.apache.org,41625,1689632118141 record at close sequenceid=2 2023-07-17 22:15:39,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,629 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=CLOSED 2023-07-17 22:15:39,629 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632139629"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632139629"}]},"ts":"1689632139629"} 2023-07-17 22:15:39,631 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-17 22:15:39,631 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,42021,1689632117931 in 162 msec 2023-07-17 22:15:39,632 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:39,783 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:39,783 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632139782"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632139782"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632139782"}]},"ts":"1689632139782"} 2023-07-17 22:15:39,784 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:39,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 26133ff4db8e1a874ad4b8256a8d5ff5, NAME => 'unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:39,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:39,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,942 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,943 DEBUG [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/ut 2023-07-17 22:15:39,943 DEBUG [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/ut 2023-07-17 22:15:39,944 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
26133ff4db8e1a874ad4b8256a8d5ff5 columnFamilyName ut 2023-07-17 22:15:39,944 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] regionserver.HStore(310): Store=26133ff4db8e1a874ad4b8256a8d5ff5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:39,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:39,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 26133ff4db8e1a874ad4b8256a8d5ff5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10200723200, jitterRate=-0.04998362064361572}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:39,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:39,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5., pid=125, masterSystemTime=1689632139936 2023-07-17 22:15:39,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:39,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 
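With unmovedTable reopened on jenkins-hbase4.apache.org,41625 (group normal), the records that follow show the core of the rename scenario: oldgroup is renamed to newgroup via the RSGroupAdminService.RenameRSGroup RPC, after which the test checks that testRename now reports newgroup while unmovedTable still reports normal. A hedged sketch of the rename call is given below, assuming the renameRSGroup client method is available in this branch.

    // Hedged sketch, assuming RSGroupAdminClient#renameRSGroup is available in this
    // branch (the log shows the matching RenameRSGroup master RPC being served).
    import java.io.IOException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameGroupSketch {
      static void rename(RSGroupAdminClient rsGroupAdmin) throws IOException {
        rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");   // group names copied from the log
      }
    }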
2023-07-17 22:15:39,954 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:39,954 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632139954"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632139954"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632139954"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632139954"}]},"ts":"1689632139954"} 2023-07-17 22:15:39,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-17 22:15:39,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,41625,1689632118141 in 171 msec 2023-07-17 22:15:39,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE in 493 msec 2023-07-17 22:15:40,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-17 22:15:40,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-17 22:15:40,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:40,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:40,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:40,472 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:40,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-17 22:15:40,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:40,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-17 22:15:40,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:40,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-17 22:15:40,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:40,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-17 22:15:40,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:40,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:40,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:40,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:40,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-17 22:15:40,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-17 22:15:40,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:40,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:40,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-17 22:15:40,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:40,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-17 22:15:40,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:40,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-17 22:15:40,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:40,491 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:40,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:40,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-17 22:15:40,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:40,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:40,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:40,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:40,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:40,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-17 22:15:40,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 26133ff4db8e1a874ad4b8256a8d5ff5 to RSGroup default 2023-07-17 22:15:40,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE 2023-07-17 22:15:40,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 22:15:40,503 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE 2023-07-17 22:15:40,504 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:40,504 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632140504"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632140504"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632140504"}]},"ts":"1689632140504"} 2023-07-17 22:15:40,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:40,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 26133ff4db8e1a874ad4b8256a8d5ff5, disabling compactions & flushes 2023-07-17 22:15:40,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:40,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:40,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. after waiting 0 ms 2023-07-17 22:15:40,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:40,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:40,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:40,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:40,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 26133ff4db8e1a874ad4b8256a8d5ff5 move to jenkins-hbase4.apache.org,42021,1689632117931 record at close sequenceid=5 2023-07-17 22:15:40,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,667 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=CLOSED 2023-07-17 22:15:40,667 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632140667"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632140667"}]},"ts":"1689632140667"} 2023-07-17 22:15:40,671 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-17 22:15:40,671 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,41625,1689632118141 in 164 msec 2023-07-17 22:15:40,673 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:40,823 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:40,824 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632140823"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632140823"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632140823"}]},"ts":"1689632140823"} 2023-07-17 22:15:40,827 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:40,971 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-17 22:15:40,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:40,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 26133ff4db8e1a874ad4b8256a8d5ff5, NAME => 'unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:40,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:40,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,984 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,985 DEBUG [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/ut 2023-07-17 22:15:40,986 DEBUG [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/ut 2023-07-17 22:15:40,986 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 26133ff4db8e1a874ad4b8256a8d5ff5 columnFamilyName ut 2023-07-17 22:15:40,986 INFO [StoreOpener-26133ff4db8e1a874ad4b8256a8d5ff5-1] regionserver.HStore(310): Store=26133ff4db8e1a874ad4b8256a8d5ff5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:40,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:40,993 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 26133ff4db8e1a874ad4b8256a8d5ff5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10505624480, jitterRate=-0.021587476134300232}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:40,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:40,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5., pid=128, masterSystemTime=1689632140979 2023-07-17 22:15:40,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:40,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 
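The open above completes the second half of a moveTables call: unmovedTable, previously pinned to the 'normal' group, is handed back to the default group, and the master carries the move out as a REOPEN/MOVE TransitRegionStateProcedure (close on jenkins-hbase4.apache.org,41625, reopen on jenkins-hbase4.apache.org,42021). A minimal client-side sketch of the same call, using the RSGroupAdminClient class that appears in the stack traces later in this log; the connection setup and the use of RSGroupInfo.DEFAULT_GROUP are assumptions of the sketch, not taken from the test itself:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToDefaultGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Ask the master to move the table; it reopens every region of the table
      // on a server of the target group, which is what the REOPEN/MOVE
      // procedures above record.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")),
          RSGroupInfo.DEFAULT_GROUP);
      // Verify where the table ended up, mirroring the GetRSGroupInfoOfTable
      // requests seen in this log.
      RSGroupInfo group =
          rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable"));
      System.out.println("unmovedTable is now in group " + group.getName());
    }
  }
}

The ProcedureSyncWait line that follows (waitFor pid=126) is the server side of this call blocking until the TransitRegionStateProcedure has finished.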
2023-07-17 22:15:40,996 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=26133ff4db8e1a874ad4b8256a8d5ff5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:40,996 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689632140996"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632140996"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632140996"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632140996"}]},"ts":"1689632140996"} 2023-07-17 22:15:40,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-17 22:15:41,000 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 26133ff4db8e1a874ad4b8256a8d5ff5, server=jenkins-hbase4.apache.org,42021,1689632117931 in 171 msec 2023-07-17 22:15:41,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=26133ff4db8e1a874ad4b8256a8d5ff5, REOPEN/MOVE in 498 msec 2023-07-17 22:15:41,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-17 22:15:41,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-17 22:15:41,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:41,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41625] to rsgroup default 2023-07-17 22:15:41,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 22:15:41,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:41,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:41,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:41,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:41,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-17 22:15:41,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41625,1689632118141] are moved back to normal 2023-07-17 22:15:41,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-17 22:15:41,511 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:41,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-17 22:15:41,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:41,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:41,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:41,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-17 22:15:41,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:41,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:41,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:41,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:41,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:41,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:41,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:41,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:41,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:41,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:41,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:41,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-17 22:15:41,533 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:41,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:41,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:41,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-17 22:15:41,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(345): Moving region 2b34f0020745232a8a57d9007f0d3248 to RSGroup default 2023-07-17 22:15:41,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE 2023-07-17 22:15:41,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 22:15:41,537 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE 2023-07-17 22:15:41,538 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:41,538 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632141538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632141538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632141538"}]},"ts":"1689632141538"} 2023-07-17 22:15:41,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,34803,1689632122825}] 2023-07-17 22:15:41,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:41,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b34f0020745232a8a57d9007f0d3248, disabling compactions & flushes 2023-07-17 22:15:41,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:41,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:41,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 
after waiting 0 ms 2023-07-17 22:15:41,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:41,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 22:15:41,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:41,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:41,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2b34f0020745232a8a57d9007f0d3248 move to jenkins-hbase4.apache.org,41625,1689632118141 record at close sequenceid=5 2023-07-17 22:15:41,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:41,700 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=CLOSED 2023-07-17 22:15:41,700 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632141700"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632141700"}]},"ts":"1689632141700"} 2023-07-17 22:15:41,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-17 22:15:41,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,34803,1689632122825 in 162 msec 2023-07-17 22:15:41,703 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:41,853 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
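The teardown in TestRSGroupsBase is now restoring the starting layout: the lone server in 'normal' (jenkins-hbase4.apache.org:41625) goes back to the default group, the emptied 'normal' and 'master' groups are removed, and testRename is moved out of 'newgroup', which closes its region 2b34f0020745232a8a57d9007f0d3248 above and reopens it on 41625 below. A minimal sketch of the server-and-group part of that cleanup, assuming the same RSGroupAdminClient API as before; only the host, port, and group name are taken from this run:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RestoreDefaultGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Drain the group first: move its only server back to the default group
      // (host and port are the ones from this run).
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41625)),
          RSGroupInfo.DEFAULT_GROUP);
      // A group can be dropped once it no longer holds servers or tables,
      // which is the order the teardown follows in the lines above.
      rsGroupAdmin.removeRSGroup("normal");
    }
  }
}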
2023-07-17 22:15:41,854 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:41,854 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632141853"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632141853"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632141853"}]},"ts":"1689632141853"} 2023-07-17 22:15:41,855 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:42,014 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:42,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b34f0020745232a8a57d9007f0d3248, NAME => 'testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:42,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:42,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,017 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,018 DEBUG [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/tr 2023-07-17 22:15:42,018 DEBUG [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/tr 2023-07-17 22:15:42,019 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b34f0020745232a8a57d9007f0d3248 columnFamilyName tr 2023-07-17 22:15:42,020 INFO [StoreOpener-2b34f0020745232a8a57d9007f0d3248-1] regionserver.HStore(310): Store=2b34f0020745232a8a57d9007f0d3248/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:42,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:42,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b34f0020745232a8a57d9007f0d3248; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11794800640, jitterRate=0.09847640991210938}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:42,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:42,029 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248., pid=131, masterSystemTime=1689632142007 2023-07-17 22:15:42,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:42,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 
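For context, the scenario being torn down here is TestRSGroupsAdmin1#testRenameRSGroup (named in the ResourceChecker summary below): earlier in this section the endpoint logged "rename rsgroup from oldgroup to newgroup", served a RenameRSGroup request, and rewrote the group znodes under /hbase/rsgroup. A hedged sketch of that rename from the client side, assuming RSGroupAdminClient exposes a renameRSGroup wrapper matching the RPC seen in the log:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RenameGroupExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Rename the group; member servers and tables keep their assignment,
      // only the group name (and its znode under /hbase/rsgroup) changes.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
      // The old name should no longer resolve; the new one should.
      RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
      System.out.println("renamed group has " + renamed.getServers().size() + " server(s)");
    }
  }
}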
2023-07-17 22:15:42,031 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2b34f0020745232a8a57d9007f0d3248, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:42,032 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689632142031"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632142031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632142031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632142031"}]},"ts":"1689632142031"} 2023-07-17 22:15:42,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-17 22:15:42,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 2b34f0020745232a8a57d9007f0d3248, server=jenkins-hbase4.apache.org,41625,1689632118141 in 178 msec 2023-07-17 22:15:42,037 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2b34f0020745232a8a57d9007f0d3248, REOPEN/MOVE in 500 msec 2023-07-17 22:15:42,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-17 22:15:42,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-17 22:15:42,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:42,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup default 2023-07-17 22:15:42,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 22:15:42,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:42,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-17 22:15:42,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to newgroup 2023-07-17 22:15:42,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-17 22:15:42,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:42,545 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-17 22:15:42,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:42,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:42,554 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:42,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:42,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:42,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:42,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:42,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:42,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633342569, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:42,570 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:42,572 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:42,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,573 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:42,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:42,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,591 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=505 (was 512), OpenFileDescriptor=767 (was 787), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=417 (was 349) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=2970 (was 2973) 2023-07-17 22:15:42,591 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-17 22:15:42,609 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=504, OpenFileDescriptor=767, MaxFileDescriptor=60000, SystemLoadAverage=417, ProcessCount=172, AvailableMemoryMB=2970 2023-07-17 22:15:42,609 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-17 22:15:42,609 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-17 22:15:42,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:42,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:42,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:42,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:42,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:42,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:42,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:42,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:42,624 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:42,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:42,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-17 22:15:42,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:42,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:42,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:42,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633342634, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:42,635 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:42,636 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:42,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,637 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:42,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:42,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-17 22:15:42,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:42,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-17 22:15:42,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-17 22:15:42,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-17 22:15:42,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-17 22:15:42,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:58158 deadline: 1689633342646, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-17 22:15:42,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-17 22:15:42,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:58158 deadline: 1689633342648, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-17 22:15:42,657 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-17 22:15:42,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-17 22:15:42,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-17 22:15:42,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:58158 deadline: 1689633342661, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-17 22:15:42,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:42,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:42,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:42,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:42,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:42,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:42,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:42,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:42,677 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:42,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:42,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:42,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:42,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:42,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:42,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633342687, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:42,690 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:42,692 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:42,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,693 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:42,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:42,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,713 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508 (was 504) Potentially hanging thread: hconnection-0x63551a-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x627f7cb-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=767 (was 767), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=417 (was 417), ProcessCount=172 (was 172), AvailableMemoryMB=2971 (was 2970) - AvailableMemoryMB LEAK? - 2023-07-17 22:15:42,714 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-17 22:15:42,731 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=508, OpenFileDescriptor=767, MaxFileDescriptor=60000, SystemLoadAverage=417, ProcessCount=172, AvailableMemoryMB=2971 2023-07-17 22:15:42,731 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-17 22:15:42,731 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-17 22:15:42,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:42,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:42,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:42,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:42,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:42,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:42,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:42,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:42,747 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:42,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:42,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:42,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:42,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:42,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:42,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:42,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633342757, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:42,758 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:42,760 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:42,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,761 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:42,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:42,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:42,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_842828489 2023-07-17 22:15:42,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_842828489 2023-07-17 
22:15:42,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:42,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:42,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:42,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup Group_testDisabledTableMove_842828489 2023-07-17 22:15:42,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_842828489 2023-07-17 22:15:42,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:42,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:42,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 22:15:42,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to default 2023-07-17 22:15:42,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_842828489 2023-07-17 22:15:42,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:42,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:42,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:42,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_842828489 2023-07-17 22:15:42,788 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:42,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:42,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:42,792 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:42,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-17 22:15:42,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-17 22:15:42,794 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:42,795 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_842828489 2023-07-17 22:15:42,795 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:42,795 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:42,797 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:42,801 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:42,801 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:42,801 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:42,801 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:42,801 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 empty. 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 empty. 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc empty. 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f empty. 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 empty. 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:42,802 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:42,803 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:42,803 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:42,803 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-17 22:15:42,817 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:42,819 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 63fc46da99bbd373e9bd4716f72efa50, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:42,819 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 76d9adea794b22a56d324899cce31a3f, NAME => 'Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:42,819 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 69f4d14780aa52a58e15a04f8dfc3799, NAME => 'Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:42,844 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:42,844 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 63fc46da99bbd373e9bd4716f72efa50, disabling compactions & flushes 2023-07-17 22:15:42,844 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:42,844 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:42,844 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. after waiting 0 ms 2023-07-17 22:15:42,844 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 
2023-07-17 22:15:42,844 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:42,844 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 63fc46da99bbd373e9bd4716f72efa50: 2023-07-17 22:15:42,845 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 5aa4f480d7ec94de6eeabdb0628d12fc, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:42,845 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:42,845 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 69f4d14780aa52a58e15a04f8dfc3799, disabling compactions & flushes 2023-07-17 22:15:42,845 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:42,845 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:42,845 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. after waiting 0 ms 2023-07-17 22:15:42,845 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:42,845 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 
2023-07-17 22:15:42,845 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 69f4d14780aa52a58e15a04f8dfc3799: 2023-07-17 22:15:42,846 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6f99bd881506b0172fdf19e272e1bc76, NAME => 'Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp 2023-07-17 22:15:42,847 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:42,847 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 76d9adea794b22a56d324899cce31a3f, disabling compactions & flushes 2023-07-17 22:15:42,847 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:42,847 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:42,847 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. after waiting 0 ms 2023-07-17 22:15:42,847 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:42,847 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:42,847 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 76d9adea794b22a56d324899cce31a3f: 2023-07-17 22:15:42,860 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:42,860 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 6f99bd881506b0172fdf19e272e1bc76, disabling compactions & flushes 2023-07-17 22:15:42,860 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 
2023-07-17 22:15:42,860 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:42,860 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. after waiting 0 ms 2023-07-17 22:15:42,860 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:42,860 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:42,860 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 6f99bd881506b0172fdf19e272e1bc76: 2023-07-17 22:15:42,864 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:42,864 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 5aa4f480d7ec94de6eeabdb0628d12fc, disabling compactions & flushes 2023-07-17 22:15:42,864 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:42,864 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:42,864 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. after waiting 0 ms 2023-07-17 22:15:42,864 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:42,864 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 
2023-07-17 22:15:42,864 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 5aa4f480d7ec94de6eeabdb0628d12fc: 2023-07-17 22:15:42,866 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:42,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632142867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632142867"}]},"ts":"1689632142867"} 2023-07-17 22:15:42,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632142867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632142867"}]},"ts":"1689632142867"} 2023-07-17 22:15:42,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632142867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632142867"}]},"ts":"1689632142867"} 2023-07-17 22:15:42,868 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632142867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632142867"}]},"ts":"1689632142867"} 2023-07-17 22:15:42,868 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632142867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632142867"}]},"ts":"1689632142867"} 2023-07-17 22:15:42,870 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-17 22:15:42,875 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:42,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632142875"}]},"ts":"1689632142875"} 2023-07-17 22:15:42,877 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-17 22:15:42,884 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:42,884 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:42,884 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:42,884 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:42,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, ASSIGN}] 2023-07-17 22:15:42,887 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, ASSIGN 2023-07-17 22:15:42,887 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, ASSIGN 2023-07-17 22:15:42,887 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, ASSIGN 2023-07-17 22:15:42,887 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, ASSIGN 2023-07-17 22:15:42,888 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, ASSIGN 2023-07-17 22:15:42,889 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:42,889 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:42,889 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42021,1689632117931; forceNewPlan=false, retain=false 2023-07-17 22:15:42,889 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:42,890 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41625,1689632118141; forceNewPlan=false, retain=false 2023-07-17 22:15:42,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-17 22:15:43,039 INFO [jenkins-hbase4:43315] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-17 22:15:43,043 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=6f99bd881506b0172fdf19e272e1bc76, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,043 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=5aa4f480d7ec94de6eeabdb0628d12fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,043 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143043"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143043"}]},"ts":"1689632143043"} 2023-07-17 22:15:43,043 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=76d9adea794b22a56d324899cce31a3f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,043 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=69f4d14780aa52a58e15a04f8dfc3799, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,043 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=63fc46da99bbd373e9bd4716f72efa50, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,043 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143043"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143043"}]},"ts":"1689632143043"} 2023-07-17 22:15:43,043 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143043"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143043"}]},"ts":"1689632143043"} 2023-07-17 22:15:43,043 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143043"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143043"}]},"ts":"1689632143043"} 2023-07-17 22:15:43,043 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143043"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143043"}]},"ts":"1689632143043"} 2023-07-17 22:15:43,045 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE; OpenRegionProcedure 6f99bd881506b0172fdf19e272e1bc76, 
server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:43,045 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=133, state=RUNNABLE; OpenRegionProcedure 76d9adea794b22a56d324899cce31a3f, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:43,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE; OpenRegionProcedure 63fc46da99bbd373e9bd4716f72efa50, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:43,047 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; OpenRegionProcedure 5aa4f480d7ec94de6eeabdb0628d12fc, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:43,048 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=134, state=RUNNABLE; OpenRegionProcedure 69f4d14780aa52a58e15a04f8dfc3799, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:43,084 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 22:15:43,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-17 22:15:43,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:43,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69f4d14780aa52a58e15a04f8dfc3799, NAME => 'Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 22:15:43,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:43,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,202 INFO [StoreOpener-69f4d14780aa52a58e15a04f8dfc3799-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,204 DEBUG [StoreOpener-69f4d14780aa52a58e15a04f8dfc3799-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/f 2023-07-17 22:15:43,204 DEBUG [StoreOpener-69f4d14780aa52a58e15a04f8dfc3799-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/f 2023-07-17 22:15:43,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:43,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 76d9adea794b22a56d324899cce31a3f, NAME => 'Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 22:15:43,205 INFO [StoreOpener-69f4d14780aa52a58e15a04f8dfc3799-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69f4d14780aa52a58e15a04f8dfc3799 columnFamilyName f 2023-07-17 22:15:43,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:43,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,205 INFO [StoreOpener-69f4d14780aa52a58e15a04f8dfc3799-1] regionserver.HStore(310): Store=69f4d14780aa52a58e15a04f8dfc3799/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:43,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,206 INFO [StoreOpener-76d9adea794b22a56d324899cce31a3f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,207 DEBUG [StoreOpener-76d9adea794b22a56d324899cce31a3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/f 2023-07-17 22:15:43,207 DEBUG [StoreOpener-76d9adea794b22a56d324899cce31a3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/f 2023-07-17 22:15:43,208 INFO [StoreOpener-76d9adea794b22a56d324899cce31a3f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 76d9adea794b22a56d324899cce31a3f columnFamilyName f 2023-07-17 22:15:43,208 INFO [StoreOpener-76d9adea794b22a56d324899cce31a3f-1] regionserver.HStore(310): Store=76d9adea794b22a56d324899cce31a3f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:43,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:43,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 69f4d14780aa52a58e15a04f8dfc3799; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10513358880, jitterRate=-0.020867154002189636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:43,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 69f4d14780aa52a58e15a04f8dfc3799: 2023-07-17 22:15:43,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799., pid=142, masterSystemTime=1689632143197 2023-07-17 22:15:43,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:43,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:43,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:43,214 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=69f4d14780aa52a58e15a04f8dfc3799, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6f99bd881506b0172fdf19e272e1bc76, NAME => 'Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 22:15:43,214 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143214"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632143214"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632143214"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632143214"}]},"ts":"1689632143214"} 2023-07-17 22:15:43,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:43,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:43,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 76d9adea794b22a56d324899cce31a3f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10411188640, jitterRate=-0.03038249909877777}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:43,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 76d9adea794b22a56d324899cce31a3f: 2023-07-17 22:15:43,216 INFO [StoreOpener-6f99bd881506b0172fdf19e272e1bc76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f., pid=139, masterSystemTime=1689632143201 2023-07-17 22:15:43,217 DEBUG [StoreOpener-6f99bd881506b0172fdf19e272e1bc76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/f 2023-07-17 22:15:43,217 DEBUG [StoreOpener-6f99bd881506b0172fdf19e272e1bc76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/f 2023-07-17 22:15:43,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:43,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:43,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 
2023-07-17 22:15:43,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63fc46da99bbd373e9bd4716f72efa50, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 22:15:43,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=134 2023-07-17 22:15:43,218 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=76d9adea794b22a56d324899cce31a3f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=134, state=SUCCESS; OpenRegionProcedure 69f4d14780aa52a58e15a04f8dfc3799, server=jenkins-hbase4.apache.org,42021,1689632117931 in 168 msec 2023-07-17 22:15:43,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:43,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,218 INFO [StoreOpener-6f99bd881506b0172fdf19e272e1bc76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f99bd881506b0172fdf19e272e1bc76 columnFamilyName f 2023-07-17 22:15:43,218 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143218"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632143218"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632143218"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632143218"}]},"ts":"1689632143218"} 2023-07-17 22:15:43,219 INFO [StoreOpener-6f99bd881506b0172fdf19e272e1bc76-1] regionserver.HStore(310): Store=6f99bd881506b0172fdf19e272e1bc76/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:43,220 INFO [StoreOpener-63fc46da99bbd373e9bd4716f72efa50-1] regionserver.HStore(381): Created 
cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,220 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, ASSIGN in 334 msec 2023-07-17 22:15:43,221 DEBUG [StoreOpener-63fc46da99bbd373e9bd4716f72efa50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/f 2023-07-17 22:15:43,221 DEBUG [StoreOpener-63fc46da99bbd373e9bd4716f72efa50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/f 2023-07-17 22:15:43,222 INFO [StoreOpener-63fc46da99bbd373e9bd4716f72efa50-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63fc46da99bbd373e9bd4716f72efa50 columnFamilyName f 2023-07-17 22:15:43,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=133 2023-07-17 22:15:43,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=133, state=SUCCESS; OpenRegionProcedure 76d9adea794b22a56d324899cce31a3f, server=jenkins-hbase4.apache.org,41625,1689632118141 in 175 msec 2023-07-17 22:15:43,222 INFO [StoreOpener-63fc46da99bbd373e9bd4716f72efa50-1] regionserver.HStore(310): Store=63fc46da99bbd373e9bd4716f72efa50/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:43,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,223 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, 
state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, ASSIGN in 338 msec 2023-07-17 22:15:43,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:43,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:43,228 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6f99bd881506b0172fdf19e272e1bc76; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11756018240, jitterRate=0.09486451745033264}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:43,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6f99bd881506b0172fdf19e272e1bc76: 2023-07-17 22:15:43,228 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 63fc46da99bbd373e9bd4716f72efa50; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10912858880, jitterRate=0.01633918285369873}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:43,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 63fc46da99bbd373e9bd4716f72efa50: 2023-07-17 22:15:43,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76., pid=138, masterSystemTime=1689632143197 2023-07-17 22:15:43,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50., pid=140, masterSystemTime=1689632143201 2023-07-17 22:15:43,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:43,230 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 
2023-07-17 22:15:43,235 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=6f99bd881506b0172fdf19e272e1bc76, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,235 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143235"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632143235"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632143235"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632143235"}]},"ts":"1689632143235"} 2023-07-17 22:15:43,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:43,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:43,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:43,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5aa4f480d7ec94de6eeabdb0628d12fc, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 22:15:43,236 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=63fc46da99bbd373e9bd4716f72efa50, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,236 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143236"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632143236"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632143236"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632143236"}]},"ts":"1689632143236"} 2023-07-17 22:15:43,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:43,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,238 INFO [StoreOpener-5aa4f480d7ec94de6eeabdb0628d12fc-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,238 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-17 22:15:43,238 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; OpenRegionProcedure 6f99bd881506b0172fdf19e272e1bc76, server=jenkins-hbase4.apache.org,42021,1689632117931 in 192 msec 2023-07-17 22:15:43,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-17 22:15:43,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; OpenRegionProcedure 63fc46da99bbd373e9bd4716f72efa50, server=jenkins-hbase4.apache.org,41625,1689632118141 in 191 msec 2023-07-17 22:15:43,239 DEBUG [StoreOpener-5aa4f480d7ec94de6eeabdb0628d12fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/f 2023-07-17 22:15:43,239 DEBUG [StoreOpener-5aa4f480d7ec94de6eeabdb0628d12fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/f 2023-07-17 22:15:43,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, ASSIGN in 354 msec 2023-07-17 22:15:43,240 INFO [StoreOpener-5aa4f480d7ec94de6eeabdb0628d12fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5aa4f480d7ec94de6eeabdb0628d12fc columnFamilyName f 2023-07-17 22:15:43,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, ASSIGN in 355 msec 2023-07-17 22:15:43,240 INFO [StoreOpener-5aa4f480d7ec94de6eeabdb0628d12fc-1] regionserver.HStore(310): Store=5aa4f480d7ec94de6eeabdb0628d12fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:43,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:43,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5aa4f480d7ec94de6eeabdb0628d12fc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11975826880, jitterRate=0.11533579230308533}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:43,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5aa4f480d7ec94de6eeabdb0628d12fc: 2023-07-17 22:15:43,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc., pid=141, masterSystemTime=1689632143201 2023-07-17 22:15:43,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:43,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 
2023-07-17 22:15:43,247 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=5aa4f480d7ec94de6eeabdb0628d12fc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,247 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143247"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632143247"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632143247"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632143247"}]},"ts":"1689632143247"} 2023-07-17 22:15:43,250 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-17 22:15:43,250 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; OpenRegionProcedure 5aa4f480d7ec94de6eeabdb0628d12fc, server=jenkins-hbase4.apache.org,41625,1689632118141 in 202 msec 2023-07-17 22:15:43,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=132 2023-07-17 22:15:43,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, ASSIGN in 366 msec 2023-07-17 22:15:43,252 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:43,252 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632143252"}]},"ts":"1689632143252"} 2023-07-17 22:15:43,253 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-17 22:15:43,255 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:43,256 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 465 msec 2023-07-17 22:15:43,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-17 22:15:43,399 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-17 22:15:43,400 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-17 22:15:43,400 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:43,403 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
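The block above finishes CreateTableProcedure pid=132 and shows the test waiting until every region of Group_testDisabledTableMove is assigned. For reference, a minimal client-side sketch of that flow against the public HBaseTestingUtility and Admin APIs could look like the following; the table name comes from the log, while the split keys, cluster size, and class name are illustrative assumptions rather than the test's actual values.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateAndWaitSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3);                       // three region servers, as in this run
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    // Illustrative split points giving five regions; the test derives its own keys.
    byte[][] splitKeys = { Bytes.toBytes("aaaaa"), Bytes.toBytes("i"),
        Bytes.toBytes("r"), Bytes.toBytes("zzzzz") };
    try (Admin admin = util.getConnection().getAdmin()) {
      // Drives a CreateTableProcedure on the master, like pid=132 above.
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build(), splitKeys);
    }
    // Mirrors "Waiting until all regions of table ... get assigned. Timeout = 60000ms".
    util.waitUntilAllRegionsAssigned(table, 60000);
    util.shutdownMiniCluster();
  }
}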
2023-07-17 22:15:43,403 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:43,404 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-17 22:15:43,404 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:43,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-17 22:15:43,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:43,410 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-17 22:15:43,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-17 22:15:43,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-17 22:15:43,414 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632143414"}]},"ts":"1689632143414"} 2023-07-17 22:15:43,415 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-17 22:15:43,417 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-17 22:15:43,417 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, UNASSIGN}] 2023-07-17 22:15:43,419 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, UNASSIGN 2023-07-17 22:15:43,419 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, UNASSIGN 2023-07-17 22:15:43,419 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, UNASSIGN 2023-07-17 22:15:43,419 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, UNASSIGN 2023-07-17 22:15:43,419 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, UNASSIGN 2023-07-17 22:15:43,420 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=5aa4f480d7ec94de6eeabdb0628d12fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,420 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=69f4d14780aa52a58e15a04f8dfc3799, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,420 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143420"}]},"ts":"1689632143420"} 2023-07-17 22:15:43,420 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143420"}]},"ts":"1689632143420"} 2023-07-17 22:15:43,420 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=63fc46da99bbd373e9bd4716f72efa50, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,420 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=76d9adea794b22a56d324899cce31a3f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,420 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143420"}]},"ts":"1689632143420"} 2023-07-17 22:15:43,420 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143420"}]},"ts":"1689632143420"} 2023-07-17 22:15:43,420 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=6f99bd881506b0172fdf19e272e1bc76, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,421 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632143420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632143420"}]},"ts":"1689632143420"} 2023-07-17 22:15:43,421 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 5aa4f480d7ec94de6eeabdb0628d12fc, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:43,422 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 69f4d14780aa52a58e15a04f8dfc3799, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:43,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=146, state=RUNNABLE; CloseRegionProcedure 63fc46da99bbd373e9bd4716f72efa50, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:43,424 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=144, state=RUNNABLE; CloseRegionProcedure 76d9adea794b22a56d324899cce31a3f, server=jenkins-hbase4.apache.org,41625,1689632118141}] 2023-07-17 22:15:43,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 6f99bd881506b0172fdf19e272e1bc76, server=jenkins-hbase4.apache.org,42021,1689632117931}] 2023-07-17 22:15:43,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-17 22:15:43,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 63fc46da99bbd373e9bd4716f72efa50, disabling compactions & flushes 2023-07-17 22:15:43,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 69f4d14780aa52a58e15a04f8dfc3799, disabling compactions & flushes 2023-07-17 22:15:43,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:43,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 
2023-07-17 22:15:43,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:43,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:43,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. after waiting 0 ms 2023-07-17 22:15:43,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:43,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. after waiting 0 ms 2023-07-17 22:15:43,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:43,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:43,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:43,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799. 2023-07-17 22:15:43,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 69f4d14780aa52a58e15a04f8dfc3799: 2023-07-17 22:15:43,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50. 2023-07-17 22:15:43,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 63fc46da99bbd373e9bd4716f72efa50: 2023-07-17 22:15:43,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6f99bd881506b0172fdf19e272e1bc76, disabling compactions & flushes 2023-07-17 22:15:43,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 
2023-07-17 22:15:43,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:43,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. after waiting 0 ms 2023-07-17 22:15:43,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:43,582 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=69f4d14780aa52a58e15a04f8dfc3799, regionState=CLOSED 2023-07-17 22:15:43,582 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143582"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632143582"}]},"ts":"1689632143582"} 2023-07-17 22:15:43,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5aa4f480d7ec94de6eeabdb0628d12fc, disabling compactions & flushes 2023-07-17 22:15:43,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:43,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 2023-07-17 22:15:43,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. after waiting 0 ms 2023-07-17 22:15:43,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 
2023-07-17 22:15:43,584 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=63fc46da99bbd373e9bd4716f72efa50, regionState=CLOSED 2023-07-17 22:15:43,584 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143584"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632143584"}]},"ts":"1689632143584"} 2023-07-17 22:15:43,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-17 22:15:43,587 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-17 22:15:43,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 69f4d14780aa52a58e15a04f8dfc3799, server=jenkins-hbase4.apache.org,42021,1689632117931 in 163 msec 2023-07-17 22:15:43,587 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; CloseRegionProcedure 63fc46da99bbd373e9bd4716f72efa50, server=jenkins-hbase4.apache.org,41625,1689632118141 in 162 msec 2023-07-17 22:15:43,588 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=63fc46da99bbd373e9bd4716f72efa50, UNASSIGN in 170 msec 2023-07-17 22:15:43,588 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=69f4d14780aa52a58e15a04f8dfc3799, UNASSIGN in 170 msec 2023-07-17 22:15:43,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:43,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76. 2023-07-17 22:15:43,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6f99bd881506b0172fdf19e272e1bc76: 2023-07-17 22:15:43,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:43,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc. 
2023-07-17 22:15:43,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5aa4f480d7ec94de6eeabdb0628d12fc: 2023-07-17 22:15:43,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,594 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=6f99bd881506b0172fdf19e272e1bc76, regionState=CLOSED 2023-07-17 22:15:43,595 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143594"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632143594"}]},"ts":"1689632143594"} 2023-07-17 22:15:43,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,595 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=5aa4f480d7ec94de6eeabdb0628d12fc, regionState=CLOSED 2023-07-17 22:15:43,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 76d9adea794b22a56d324899cce31a3f, disabling compactions & flushes 2023-07-17 22:15:43,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:43,596 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689632143595"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632143595"}]},"ts":"1689632143595"} 2023-07-17 22:15:43,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:43,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. after waiting 0 ms 2023-07-17 22:15:43,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 
2023-07-17 22:15:43,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-17 22:15:43,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 6f99bd881506b0172fdf19e272e1bc76, server=jenkins-hbase4.apache.org,42021,1689632117931 in 171 msec 2023-07-17 22:15:43,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f99bd881506b0172fdf19e272e1bc76, UNASSIGN in 181 msec 2023-07-17 22:15:43,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-17 22:15:43,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 5aa4f480d7ec94de6eeabdb0628d12fc, server=jenkins-hbase4.apache.org,41625,1689632118141 in 177 msec 2023-07-17 22:15:43,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:43,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f. 2023-07-17 22:15:43,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 76d9adea794b22a56d324899cce31a3f: 2023-07-17 22:15:43,601 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5aa4f480d7ec94de6eeabdb0628d12fc, UNASSIGN in 183 msec 2023-07-17 22:15:43,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,602 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=76d9adea794b22a56d324899cce31a3f, regionState=CLOSED 2023-07-17 22:15:43,602 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689632143602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632143602"}]},"ts":"1689632143602"} 2023-07-17 22:15:43,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=144 2023-07-17 22:15:43,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=144, state=SUCCESS; CloseRegionProcedure 76d9adea794b22a56d324899cce31a3f, server=jenkins-hbase4.apache.org,41625,1689632118141 in 179 msec 2023-07-17 22:15:43,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-17 22:15:43,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=76d9adea794b22a56d324899cce31a3f, UNASSIGN in 187 msec 2023-07-17 22:15:43,606 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632143606"}]},"ts":"1689632143606"} 2023-07-17 22:15:43,607 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-17 22:15:43,609 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-17 22:15:43,610 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 198 msec 2023-07-17 22:15:43,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-17 22:15:43,716 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-17 22:15:43,716 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:43,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:43,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-17 22:15:43,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_842828489, current retry=0 2023-07-17 22:15:43,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_842828489. 
2023-07-17 22:15:43,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:43,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-17 22:15:43,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:43,728 INFO [Listener at localhost/37695] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-17 22:15:43,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-17 22:15:43,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:43,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:58158 deadline: 1689632203728, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-17 22:15:43,729 DEBUG [Listener at localhost/37695] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
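The TableNotEnabledException above is the expected result of calling disableTable on a table that is already disabled; the test utility notes "already disabled, so just deleting it" and proceeds straight to the delete. A small sketch of that guard using only the plain Admin API (the helper name and shape are illustrative, not the utility's actual code):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableSketch {
  // Disable the table if it is still enabled, then delete it. A second disableTable()
  // on an already-disabled table fails with TableNotEnabledException, as in the log.
  static void dropTable(Admin admin, TableName table) throws IOException {
    try {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
    } catch (TableNotEnabledException e) {
      // Raced with another disable: the table is already disabled, which is fine here.
    }
    admin.deleteTable(table);   // drives a DeleteTableProcedure, like pid=155 below
  }
}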
2023-07-17 22:15:43,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-17 22:15:43,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,732 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_842828489' 2023-07-17 22:15:43,733 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:43,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:43,739 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,739 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,739 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,739 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,739 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-17 22:15:43,741 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/f, FileablePath, 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/recovered.edits] 2023-07-17 22:15:43,741 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/recovered.edits] 2023-07-17 22:15:43,741 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/recovered.edits] 2023-07-17 22:15:43,742 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/recovered.edits] 2023-07-17 22:15:43,742 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/f, FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/recovered.edits] 2023-07-17 22:15:43,750 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f/recovered.edits/4.seqid 2023-07-17 22:15:43,751 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/76d9adea794b22a56d324899cce31a3f 2023-07-17 22:15:43,751 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50/recovered.edits/4.seqid 2023-07-17 22:15:43,751 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/recovered.edits/4.seqid to 
hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76/recovered.edits/4.seqid 2023-07-17 22:15:43,751 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc/recovered.edits/4.seqid 2023-07-17 22:15:43,751 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/recovered.edits/4.seqid to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/archive/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799/recovered.edits/4.seqid 2023-07-17 22:15:43,752 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/63fc46da99bbd373e9bd4716f72efa50 2023-07-17 22:15:43,752 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/6f99bd881506b0172fdf19e272e1bc76 2023-07-17 22:15:43,752 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/69f4d14780aa52a58e15a04f8dfc3799 2023-07-17 22:15:43,752 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/.tmp/data/default/Group_testDisabledTableMove/5aa4f480d7ec94de6eeabdb0628d12fc 2023-07-17 22:15:43,753 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-17 22:15:43,755 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,758 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-17 22:15:43,762 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-17 22:15:43,763 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,763 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-17 22:15:43,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632143763"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:43,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632143763"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:43,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632143763"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:43,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632143763"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:43,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632143763"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:43,766 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-17 22:15:43,766 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 76d9adea794b22a56d324899cce31a3f, NAME => 'Group_testDisabledTableMove,,1689632142789.76d9adea794b22a56d324899cce31a3f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 69f4d14780aa52a58e15a04f8dfc3799, NAME => 'Group_testDisabledTableMove,aaaaa,1689632142789.69f4d14780aa52a58e15a04f8dfc3799.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 63fc46da99bbd373e9bd4716f72efa50, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689632142789.63fc46da99bbd373e9bd4716f72efa50.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 5aa4f480d7ec94de6eeabdb0628d12fc, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689632142789.5aa4f480d7ec94de6eeabdb0628d12fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 6f99bd881506b0172fdf19e272e1bc76, NAME => 'Group_testDisabledTableMove,zzzzz,1689632142789.6f99bd881506b0172fdf19e272e1bc76.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-17 22:15:43,766 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-17 22:15:43,766 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632143766"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:43,768 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-17 22:15:43,770 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 22:15:43,771 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 40 msec 2023-07-17 22:15:43,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-17 22:15:43,842 INFO [Listener at localhost/37695] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-17 22:15:43,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:43,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:43,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:43,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:34647] to rsgroup default 2023-07-17 22:15:43,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:43,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:43,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_842828489, current retry=0 2023-07-17 22:15:43,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34647,1689632118064, jenkins-hbase4.apache.org,34803,1689632122825] are moved back to Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_842828489 => default 2023-07-17 22:15:43,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:43,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_842828489 2023-07-17 22:15:43,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:43,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:43,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:43,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:43,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
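Editorial note: this block is the standard TestRSGroupsBase per-test cleanup as seen from the master's side: empty MoveTables/MoveServers calls are ignored, the leftover Group_testDisabledTableMove_842828489 group is drained back into 'default' and removed, and the 'master' group is dropped and recreated. The ConstraintException that follows is the harness attempting to move the master's own address (jenkins-hbase4.apache.org:43315) into that group, which RSGroupAdminServer rejects because the address is not a live region server; the test logs it as "Got this on setup, FYI" and continues. A hedged sketch of the drain-and-remove half of that cleanup, using the same RSGroupAdminClient class that appears in the stack traces below (the wrapper class and variable names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class ResetRSGroups {
      /** Hedged sketch of the per-test cleanup reflected above: drain a group back into 'default', then drop it. */
      static void drainAndRemove(Connection conn, String group) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        if (info == null) {
          return; // nothing left to clean up
        }
        if (!info.getTables().isEmpty()) {
          rsGroupAdmin.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);   // MoveTables request in the log
        }
        if (!info.getServers().isEmpty()) {
          rsGroupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP); // MoveServers request in the log
        }
        rsGroupAdmin.removeRSGroup(group);                                        // RemoveRSGroup request in the log
      }
    }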
2023-07-17 22:15:43,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:43,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:43,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:43,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:43,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:43,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:43,867 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:43,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:43,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:43,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:43,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:43,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:43,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:43,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633343877, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:43,877 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:43,879 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:43,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,880 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:43,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:43,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:43,898 INFO [Listener at localhost/37695] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512 (was 508) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2075976686_17 at /127.0.0.1:34236 [Waiting for operation #16] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1897541910_17 at /127.0.0.1:51368 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b2f1cbe-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63551a-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 767) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=417 (was 417), ProcessCount=172 (was 172), AvailableMemoryMB=2981 (was 2971) - AvailableMemoryMB LEAK? 
- 2023-07-17 22:15:43,898 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-17 22:15:43,914 INFO [Listener at localhost/37695] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=512, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=417, ProcessCount=172, AvailableMemoryMB=2980 2023-07-17 22:15:43,914 WARN [Listener at localhost/37695] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-17 22:15:43,914 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-17 22:15:43,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:43,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:43,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:43,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:43,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:43,928 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:43,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:43,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:43,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-17 22:15:43,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:43,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:43,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43315] to rsgroup master 2023-07-17 22:15:43,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:43,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58158 deadline: 1689633343940, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 2023-07-17 22:15:43,940 WARN [Listener at localhost/37695] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43315 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:43,942 INFO [Listener at localhost/37695] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:43,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:43,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:43,943 INFO [Listener at localhost/37695] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34647, jenkins-hbase4.apache.org:34803, jenkins-hbase4.apache.org:41625, jenkins-hbase4.apache.org:42021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:43,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:43,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43315] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:43,944 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-17 22:15:43,944 INFO [Listener at localhost/37695] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-17 22:15:43,944 DEBUG [Listener at localhost/37695] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f52b832 to 127.0.0.1:57139 2023-07-17 22:15:43,944 DEBUG [Listener at localhost/37695] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:43,945 DEBUG [Listener at localhost/37695] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-17 22:15:43,945 DEBUG [Listener at localhost/37695] util.JVMClusterUtil(257): Found active master hash=1778158393, stopped=false 2023-07-17 22:15:43,945 DEBUG [Listener at localhost/37695] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 22:15:43,945 DEBUG [Listener at localhost/37695] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 22:15:43,945 INFO [Listener at localhost/37695] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43315,1689632115843 2023-07-17 22:15:43,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:43,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:43,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:43,948 DEBUG 
[Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:43,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:43,947 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:43,948 INFO [Listener at localhost/37695] procedure2.ProcedureExecutor(629): Stopping 2023-07-17 22:15:43,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:43,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:43,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:43,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:43,948 DEBUG [Listener at localhost/37695] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x339e0a8e to 127.0.0.1:57139 2023-07-17 22:15:43,949 DEBUG [Listener at localhost/37695] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:43,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:43,949 INFO [Listener at localhost/37695] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42021,1689632117931' ***** 2023-07-17 22:15:43,949 INFO [Listener at localhost/37695] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:43,949 INFO [Listener at localhost/37695] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34647,1689632118064' ***** 2023-07-17 22:15:43,949 INFO [Listener at localhost/37695] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:43,949 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:43,949 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:43,949 INFO [Listener at localhost/37695] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41625,1689632118141' ***** 2023-07-17 22:15:43,950 INFO [Listener at localhost/37695] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:43,950 INFO [Listener at localhost/37695] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34803,1689632122825' ***** 2023-07-17 22:15:43,950 INFO 
[RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:43,950 INFO [Listener at localhost/37695] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:43,952 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:43,968 INFO [RS:3;jenkins-hbase4:34803] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@78c1ca58{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:43,968 INFO [RS:2;jenkins-hbase4:41625] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3708da20{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:43,968 INFO [RS:0;jenkins-hbase4:42021] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3900982e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:43,968 INFO [RS:1;jenkins-hbase4:34647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@18cf84a5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:43,972 INFO [RS:2;jenkins-hbase4:41625] server.AbstractConnector(383): Stopped ServerConnector@56e7872c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:43,972 INFO [RS:0;jenkins-hbase4:42021] server.AbstractConnector(383): Stopped ServerConnector@3616823c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:43,972 INFO [RS:3;jenkins-hbase4:34803] server.AbstractConnector(383): Stopped ServerConnector@77e30ea5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:43,972 INFO [RS:1;jenkins-hbase4:34647] server.AbstractConnector(383): Stopped ServerConnector@22f55d63{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:43,973 INFO [RS:3;jenkins-hbase4:34803] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:43,973 INFO [RS:0;jenkins-hbase4:42021] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:43,972 INFO [RS:2;jenkins-hbase4:41625] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:43,973 INFO [RS:3;jenkins-hbase4:34803] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1755bd06{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:43,975 INFO [RS:2;jenkins-hbase4:41625] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c1fc007{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:43,976 INFO [RS:3;jenkins-hbase4:34803] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@623affc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:43,973 INFO [RS:1;jenkins-hbase4:34647] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:43,976 INFO [RS:2;jenkins-hbase4:41625] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@ac4d373{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:43,974 INFO [RS:0;jenkins-hbase4:42021] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d793d90{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:43,977 INFO [RS:1;jenkins-hbase4:34647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@532be8b6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:43,978 INFO [RS:0;jenkins-hbase4:42021] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3063b687{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:43,977 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-17 22:15:43,979 INFO [RS:1;jenkins-hbase4:34647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33aee7d7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:43,979 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-17 22:15:43,980 INFO [RS:3;jenkins-hbase4:34803] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:43,980 INFO [RS:3;jenkins-hbase4:34803] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:43,980 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:43,980 INFO [RS:3;jenkins-hbase4:34803] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:43,980 INFO [RS:1;jenkins-hbase4:34647] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:43,981 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:43,981 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:43,981 INFO [RS:0;jenkins-hbase4:42021] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:43,981 DEBUG [RS:3;jenkins-hbase4:34803] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b9368bf to 127.0.0.1:57139 2023-07-17 22:15:43,981 INFO [RS:2;jenkins-hbase4:41625] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:43,981 DEBUG [RS:3;jenkins-hbase4:34803] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:43,981 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:43,981 INFO [RS:0;jenkins-hbase4:42021] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:43,982 INFO [RS:2;jenkins-hbase4:41625] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-17 22:15:43,981 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:43,982 INFO [RS:2;jenkins-hbase4:41625] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:43,982 INFO [RS:0;jenkins-hbase4:42021] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:43,981 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34803,1689632122825; all regions closed. 2023-07-17 22:15:43,982 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(3305): Received CLOSE for 26133ff4db8e1a874ad4b8256a8d5ff5 2023-07-17 22:15:43,982 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(3305): Received CLOSE for 50dfbd4291683110d06a43487ab94cb0 2023-07-17 22:15:43,982 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(3305): Received CLOSE for 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:43,982 INFO [RS:1;jenkins-hbase4:34647] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:43,983 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:43,983 DEBUG [RS:2;jenkins-hbase4:41625] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x301cf1f0 to 127.0.0.1:57139 2023-07-17 22:15:43,984 DEBUG [RS:2;jenkins-hbase4:41625] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:43,984 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 22:15:43,984 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1478): Online Regions={2b34f0020745232a8a57d9007f0d3248=testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248.} 2023-07-17 22:15:43,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 26133ff4db8e1a874ad4b8256a8d5ff5, disabling compactions & flushes 2023-07-17 22:15:43,982 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(3305): Received CLOSE for fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:43,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:43,983 INFO [RS:1;jenkins-hbase4:34647] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:43,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:43,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. after waiting 0 ms 2023-07-17 22:15:43,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 
2023-07-17 22:15:43,984 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:43,985 DEBUG [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1504): Waiting on 2b34f0020745232a8a57d9007f0d3248 2023-07-17 22:15:43,985 DEBUG [RS:0;jenkins-hbase4:42021] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3218058a to 127.0.0.1:57139 2023-07-17 22:15:43,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b34f0020745232a8a57d9007f0d3248, disabling compactions & flushes 2023-07-17 22:15:43,985 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:43,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:43,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. after waiting 0 ms 2023-07-17 22:15:43,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:43,985 DEBUG [RS:0;jenkins-hbase4:42021] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:43,986 INFO [RS:0;jenkins-hbase4:42021] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:43,986 INFO [RS:0;jenkins-hbase4:42021] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:43,986 INFO [RS:0;jenkins-hbase4:42021] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:43,986 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-17 22:15:43,984 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:43,986 DEBUG [RS:1;jenkins-hbase4:34647] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1e6a0baf to 127.0.0.1:57139 2023-07-17 22:15:43,987 DEBUG [RS:1;jenkins-hbase4:34647] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:43,987 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34647,1689632118064; all regions closed. 
2023-07-17 22:15:43,987 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-17 22:15:43,987 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1478): Online Regions={26133ff4db8e1a874ad4b8256a8d5ff5=unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5., 50dfbd4291683110d06a43487ab94cb0=hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0., fdcdbf251438e26cb4d3816e7324408a=hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a., 1588230740=hbase:meta,,1.1588230740} 2023-07-17 22:15:43,990 DEBUG [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1504): Waiting on 1588230740, 26133ff4db8e1a874ad4b8256a8d5ff5, 50dfbd4291683110d06a43487ab94cb0, fdcdbf251438e26cb4d3816e7324408a 2023-07-17 22:15:43,991 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:43,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:43,991 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:43,991 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:43,991 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:43,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=38.63 KB heapSize=63 KB 2023-07-17 22:15:43,993 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:43,993 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,003 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,003 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/testRename/2b34f0020745232a8a57d9007f0d3248/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-17 22:15:44,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 2023-07-17 22:15:44,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b34f0020745232a8a57d9007f0d3248: 2023-07-17 22:15:44,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689632137185.2b34f0020745232a8a57d9007f0d3248. 
2023-07-17 22:15:44,005 DEBUG [RS:3;jenkins-hbase4:34803] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs 2023-07-17 22:15:44,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/default/unmovedTable/26133ff4db8e1a874ad4b8256a8d5ff5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-17 22:15:44,006 INFO [RS:3;jenkins-hbase4:34803] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34803%2C1689632122825:(num 1689632123120) 2023-07-17 22:15:44,006 DEBUG [RS:3;jenkins-hbase4:34803] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:44,006 DEBUG [RS:1;jenkins-hbase4:34647] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs 2023-07-17 22:15:44,006 INFO [RS:3;jenkins-hbase4:34803] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,006 INFO [RS:1;jenkins-hbase4:34647] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34647%2C1689632118064:(num 1689632120659) 2023-07-17 22:15:44,007 DEBUG [RS:1;jenkins-hbase4:34647] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:44,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:44,007 INFO [RS:3;jenkins-hbase4:34803] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:44,007 INFO [RS:1;jenkins-hbase4:34647] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 26133ff4db8e1a874ad4b8256a8d5ff5: 2023-07-17 22:15:44,007 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:44,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689632138847.26133ff4db8e1a874ad4b8256a8d5ff5. 2023-07-17 22:15:44,007 INFO [RS:3;jenkins-hbase4:34803] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:44,007 INFO [RS:3;jenkins-hbase4:34803] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:44,007 INFO [RS:3;jenkins-hbase4:34803] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:44,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 50dfbd4291683110d06a43487ab94cb0, disabling compactions & flushes 2023-07-17 22:15:44,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:44,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
2023-07-17 22:15:44,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. after waiting 0 ms 2023-07-17 22:15:44,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:44,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 50dfbd4291683110d06a43487ab94cb0 1/1 column families, dataSize=22.09 KB heapSize=36.55 KB 2023-07-17 22:15:44,009 INFO [RS:1;jenkins-hbase4:34647] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:44,010 INFO [RS:1;jenkins-hbase4:34647] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:44,010 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:44,010 INFO [RS:1;jenkins-hbase4:34647] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:44,011 INFO [RS:1;jenkins-hbase4:34647] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:44,011 INFO [RS:3;jenkins-hbase4:34803] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34803 2023-07-17 22:15:44,012 INFO [RS:1;jenkins-hbase4:34647] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34647 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:44,021 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): 
regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34803,1689632122825 2023-07-17 22:15:44,021 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:44,021 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:44,020 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:44,021 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:44,021 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:44,021 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34803,1689632122825] 2023-07-17 22:15:44,021 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34647,1689632118064 2023-07-17 22:15:44,021 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34803,1689632122825; numProcessing=1 2023-07-17 22:15:44,023 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34803,1689632122825 already deleted, retry=false 2023-07-17 22:15:44,023 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34803,1689632122825 expired; onlineServers=3 2023-07-17 22:15:44,023 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34647,1689632118064] 2023-07-17 22:15:44,023 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34647,1689632118064; numProcessing=2 2023-07-17 22:15:44,038 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=35.70 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/info/0dde7116fa744e4989849a614bdf0176 2023-07-17 22:15:44,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.09 KB at sequenceid=101 (bloomFilter=true), 
to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/.tmp/m/7a5514b9b0b4416aa491717b353c5bb2 2023-07-17 22:15:44,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0dde7116fa744e4989849a614bdf0176 2023-07-17 22:15:44,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a5514b9b0b4416aa491717b353c5bb2 2023-07-17 22:15:44,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/.tmp/m/7a5514b9b0b4416aa491717b353c5bb2 as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m/7a5514b9b0b4416aa491717b353c5bb2 2023-07-17 22:15:44,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a5514b9b0b4416aa491717b353c5bb2 2023-07-17 22:15:44,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/m/7a5514b9b0b4416aa491717b353c5bb2, entries=22, sequenceid=101, filesize=5.9 K 2023-07-17 22:15:44,064 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.09 KB/22618, heapSize ~36.53 KB/37408, currentSize=0 B/0 for 50dfbd4291683110d06a43487ab94cb0 in 56ms, sequenceid=101, compaction requested=false 2023-07-17 22:15:44,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/rsgroup/50dfbd4291683110d06a43487ab94cb0/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-17 22:15:44,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/rep_barrier/7b96a64cab06417ea9c06328f84775ca 2023-07-17 22:15:44,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:44,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 2023-07-17 22:15:44,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 50dfbd4291683110d06a43487ab94cb0: 2023-07-17 22:15:44,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689632122118.50dfbd4291683110d06a43487ab94cb0. 
2023-07-17 22:15:44,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fdcdbf251438e26cb4d3816e7324408a, disabling compactions & flushes 2023-07-17 22:15:44,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:44,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:44,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. after waiting 0 ms 2023-07-17 22:15:44,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:44,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/namespace/fdcdbf251438e26cb4d3816e7324408a/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-17 22:15:44,087 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b96a64cab06417ea9c06328f84775ca 2023-07-17 22:15:44,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 2023-07-17 22:15:44,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fdcdbf251438e26cb4d3816e7324408a: 2023-07-17 22:15:44,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689632121315.fdcdbf251438e26cb4d3816e7324408a. 
2023-07-17 22:15:44,102 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/table/745606e6607e4f8ba76f06c1d9e5e7cf 2023-07-17 22:15:44,107 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 745606e6607e4f8ba76f06c1d9e5e7cf 2023-07-17 22:15:44,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/info/0dde7116fa744e4989849a614bdf0176 as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info/0dde7116fa744e4989849a614bdf0176 2023-07-17 22:15:44,113 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0dde7116fa744e4989849a614bdf0176 2023-07-17 22:15:44,113 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/info/0dde7116fa744e4989849a614bdf0176, entries=72, sequenceid=210, filesize=13.1 K 2023-07-17 22:15:44,114 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/rep_barrier/7b96a64cab06417ea9c06328f84775ca as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier/7b96a64cab06417ea9c06328f84775ca 2023-07-17 22:15:44,119 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b96a64cab06417ea9c06328f84775ca 2023-07-17 22:15:44,119 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/rep_barrier/7b96a64cab06417ea9c06328f84775ca, entries=8, sequenceid=210, filesize=5.8 K 2023-07-17 22:15:44,120 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/.tmp/table/745606e6607e4f8ba76f06c1d9e5e7cf as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table/745606e6607e4f8ba76f06c1d9e5e7cf 2023-07-17 22:15:44,124 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,124 INFO [RS:1;jenkins-hbase4:34647] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34647,1689632118064; zookeeper connection closed. 
2023-07-17 22:15:44,125 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34647-0x101755a8bb70002, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,125 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@410d6e75] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@410d6e75 2023-07-17 22:15:44,126 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34647,1689632118064 already deleted, retry=false 2023-07-17 22:15:44,126 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34647,1689632118064 expired; onlineServers=2 2023-07-17 22:15:44,127 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 745606e6607e4f8ba76f06c1d9e5e7cf 2023-07-17 22:15:44,127 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/table/745606e6607e4f8ba76f06c1d9e5e7cf, entries=16, sequenceid=210, filesize=6.0 K 2023-07-17 22:15:44,128 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~38.63 KB/39552, heapSize ~62.95 KB/64464, currentSize=0 B/0 for 1588230740 in 136ms, sequenceid=210, compaction requested=false 2023-07-17 22:15:44,138 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=95 2023-07-17 22:15:44,139 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:44,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:44,139 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:44,140 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:44,147 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,147 INFO [RS:3;jenkins-hbase4:34803] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34803,1689632122825; zookeeper connection closed. 2023-07-17 22:15:44,147 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:34803-0x101755a8bb7000b, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,147 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7cfbce9d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7cfbce9d 2023-07-17 22:15:44,185 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41625,1689632118141; all regions closed. 
2023-07-17 22:15:44,191 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42021,1689632117931; all regions closed. 2023-07-17 22:15:44,193 DEBUG [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs 2023-07-17 22:15:44,193 INFO [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41625%2C1689632118141.meta:.meta(num 1689632121046) 2023-07-17 22:15:44,199 DEBUG [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs 2023-07-17 22:15:44,199 INFO [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42021%2C1689632117931.meta:.meta(num 1689632128560) 2023-07-17 22:15:44,203 DEBUG [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs 2023-07-17 22:15:44,203 INFO [RS:2;jenkins-hbase4:41625] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41625%2C1689632118141:(num 1689632120692) 2023-07-17 22:15:44,203 DEBUG [RS:2;jenkins-hbase4:41625] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:44,203 INFO [RS:2;jenkins-hbase4:41625] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,203 INFO [RS:2;jenkins-hbase4:41625] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:44,203 INFO [RS:2;jenkins-hbase4:41625] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:44,203 INFO [RS:2;jenkins-hbase4:41625] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:44,203 INFO [RS:2;jenkins-hbase4:41625] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:44,203 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:44,204 INFO [RS:2;jenkins-hbase4:41625] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41625 2023-07-17 22:15:44,205 DEBUG [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/oldWALs 2023-07-17 22:15:44,205 INFO [RS:0;jenkins-hbase4:42021] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42021%2C1689632117931:(num 1689632120672) 2023-07-17 22:15:44,205 DEBUG [RS:0;jenkins-hbase4:42021] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:44,205 INFO [RS:0;jenkins-hbase4:42021] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:44,205 INFO [RS:0;jenkins-hbase4:42021] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:44,205 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-17 22:15:44,206 INFO [RS:0;jenkins-hbase4:42021] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42021 2023-07-17 22:15:44,209 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:44,209 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:44,209 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42021,1689632117931 2023-07-17 22:15:44,209 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:44,209 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41625,1689632118141 2023-07-17 22:15:44,210 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42021,1689632117931] 2023-07-17 22:15:44,210 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42021,1689632117931; numProcessing=3 2023-07-17 22:15:44,213 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42021,1689632117931 already deleted, retry=false 2023-07-17 22:15:44,213 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42021,1689632117931 expired; onlineServers=1 2023-07-17 22:15:44,213 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41625,1689632118141] 2023-07-17 22:15:44,213 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41625,1689632118141; numProcessing=4 2023-07-17 22:15:44,214 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41625,1689632118141 already deleted, retry=false 2023-07-17 22:15:44,214 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41625,1689632118141 expired; onlineServers=0 2023-07-17 22:15:44,214 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43315,1689632115843' ***** 2023-07-17 22:15:44,214 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-17 22:15:44,214 DEBUG [M:0;jenkins-hbase4:43315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28c18102, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:44,215 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:44,216 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:44,216 INFO [M:0;jenkins-hbase4:43315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27ce108a{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 22:15:44,216 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:44,217 INFO [M:0;jenkins-hbase4:43315] server.AbstractConnector(383): Stopped ServerConnector@93a89c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:44,217 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:44,217 INFO [M:0;jenkins-hbase4:43315] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:44,218 INFO [M:0;jenkins-hbase4:43315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e41c305{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:44,218 INFO [M:0;jenkins-hbase4:43315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32d90619{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:44,219 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43315,1689632115843 2023-07-17 22:15:44,219 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43315,1689632115843; all regions closed. 2023-07-17 22:15:44,219 DEBUG [M:0;jenkins-hbase4:43315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:44,219 INFO [M:0;jenkins-hbase4:43315] master.HMaster(1491): Stopping master jetty server 2023-07-17 22:15:44,219 INFO [M:0;jenkins-hbase4:43315] server.AbstractConnector(383): Stopped ServerConnector@ef9369a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:44,220 DEBUG [M:0;jenkins-hbase4:43315] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-17 22:15:44,220 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-17 22:15:44,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632120015] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632120015,5,FailOnTimeoutGroup] 2023-07-17 22:15:44,220 DEBUG [M:0;jenkins-hbase4:43315] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-17 22:15:44,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632120015] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632120015,5,FailOnTimeoutGroup] 2023-07-17 22:15:44,220 INFO [M:0;jenkins-hbase4:43315] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-17 22:15:44,220 INFO [M:0;jenkins-hbase4:43315] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-17 22:15:44,220 INFO [M:0;jenkins-hbase4:43315] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-17 22:15:44,220 DEBUG [M:0;jenkins-hbase4:43315] master.HMaster(1512): Stopping service threads 2023-07-17 22:15:44,220 INFO [M:0;jenkins-hbase4:43315] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-17 22:15:44,221 ERROR [M:0;jenkins-hbase4:43315] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-17 22:15:44,221 INFO [M:0;jenkins-hbase4:43315] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-17 22:15:44,221 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-17 22:15:44,222 DEBUG [M:0;jenkins-hbase4:43315] zookeeper.ZKUtil(398): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-17 22:15:44,222 WARN [M:0;jenkins-hbase4:43315] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-17 22:15:44,222 INFO [M:0;jenkins-hbase4:43315] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-17 22:15:44,222 INFO [M:0;jenkins-hbase4:43315] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-17 22:15:44,222 DEBUG [M:0;jenkins-hbase4:43315] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 22:15:44,222 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:44,222 DEBUG [M:0;jenkins-hbase4:43315] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 22:15:44,222 DEBUG [M:0;jenkins-hbase4:43315] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 22:15:44,222 DEBUG [M:0;jenkins-hbase4:43315] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:44,222 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.20 KB heapSize=621.30 KB 2023-07-17 22:15:44,236 INFO [M:0;jenkins-hbase4:43315] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.20 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/156453538c3446f8888c96068a5114fa 2023-07-17 22:15:44,242 DEBUG [M:0;jenkins-hbase4:43315] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/156453538c3446f8888c96068a5114fa as hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/156453538c3446f8888c96068a5114fa 2023-07-17 22:15:44,247 INFO [M:0;jenkins-hbase4:43315] regionserver.HStore(1080): Added hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/156453538c3446f8888c96068a5114fa, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-17 22:15:44,248 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegion(2948): Finished flush of dataSize ~519.20 KB/531657, heapSize ~621.29 KB/636200, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=1152, compaction requested=false 2023-07-17 22:15:44,250 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:44,250 DEBUG [M:0;jenkins-hbase4:43315] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:44,254 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:44,254 INFO [M:0;jenkins-hbase4:43315] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-17 22:15:44,255 INFO [M:0;jenkins-hbase4:43315] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43315 2023-07-17 22:15:44,258 DEBUG [M:0;jenkins-hbase4:43315] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43315,1689632115843 already deleted, retry=false 2023-07-17 22:15:44,749 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,749 INFO [M:0;jenkins-hbase4:43315] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43315,1689632115843; zookeeper connection closed. 
2023-07-17 22:15:44,749 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): master:43315-0x101755a8bb70000, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,849 INFO [RS:2;jenkins-hbase4:41625] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41625,1689632118141; zookeeper connection closed. 2023-07-17 22:15:44,849 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,849 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:41625-0x101755a8bb70003, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,849 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7a3c9ee] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7a3c9ee 2023-07-17 22:15:44,949 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,949 INFO [RS:0;jenkins-hbase4:42021] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42021,1689632117931; zookeeper connection closed. 2023-07-17 22:15:44,949 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): regionserver:42021-0x101755a8bb70001, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:44,949 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@79f62b5c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@79f62b5c 2023-07-17 22:15:44,950 INFO [Listener at localhost/37695] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-17 22:15:44,950 WARN [Listener at localhost/37695] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:44,962 INFO [Listener at localhost/37695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:44,972 WARN [BP-690370225-172.31.14.131-1689632111991 heartbeating to localhost/127.0.0.1:38457] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:44,972 WARN [BP-690370225-172.31.14.131-1689632111991 heartbeating to localhost/127.0.0.1:38457] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-690370225-172.31.14.131-1689632111991 (Datanode Uuid 68ac323e-41c1-4b73-8ba0-ab0a78db4c28) service to localhost/127.0.0.1:38457 2023-07-17 22:15:44,975 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data5/current/BP-690370225-172.31.14.131-1689632111991] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:44,975 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data6/current/BP-690370225-172.31.14.131-1689632111991] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:44,977 WARN [Listener at localhost/37695] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:44,991 INFO [Listener at localhost/37695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:45,095 WARN [BP-690370225-172.31.14.131-1689632111991 heartbeating to localhost/127.0.0.1:38457] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:45,095 WARN [BP-690370225-172.31.14.131-1689632111991 heartbeating to localhost/127.0.0.1:38457] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-690370225-172.31.14.131-1689632111991 (Datanode Uuid 63306b39-16dd-4949-bc27-618b5c64090d) service to localhost/127.0.0.1:38457 2023-07-17 22:15:45,096 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data3/current/BP-690370225-172.31.14.131-1689632111991] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:45,097 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data4/current/BP-690370225-172.31.14.131-1689632111991] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:45,098 WARN [Listener at localhost/37695] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:45,107 INFO [Listener at localhost/37695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:45,212 WARN [BP-690370225-172.31.14.131-1689632111991 heartbeating to localhost/127.0.0.1:38457] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:45,212 WARN [BP-690370225-172.31.14.131-1689632111991 heartbeating to localhost/127.0.0.1:38457] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-690370225-172.31.14.131-1689632111991 (Datanode Uuid 3621efb1-40cf-4c42-b131-74ca6a4c5501) service to localhost/127.0.0.1:38457 2023-07-17 22:15:45,212 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data1/current/BP-690370225-172.31.14.131-1689632111991] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:45,213 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/cluster_2e37384a-437b-4b5a-b559-34afc86ec314/dfs/data/data2/current/BP-690370225-172.31.14.131-1689632111991] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-17 22:15:45,263 INFO [Listener at localhost/37695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:45,292 INFO [Listener at localhost/37695] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-17 22:15:45,348 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-17 22:15:45,348 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-17 22:15:45,348 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.log.dir so I do NOT create it in target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739 2023-07-17 22:15:45,348 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/794c75f5-94a6-97a7-73a0-371fe56230e9/hadoop.tmp.dir so I do NOT create it in target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739 2023-07-17 22:15:45,348 INFO [Listener at localhost/37695] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb, deleteOnExit=true 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/test.cache.data in system properties and HBase conf 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.tmp.dir in system properties and HBase conf 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir in system properties and HBase conf 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-17 22:15:45,349 INFO [Listener at localhost/37695] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-17 22:15:45,349 DEBUG [Listener at localhost/37695] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 22:15:45,350 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-17 22:15:45,351 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/nfs.dump.dir in system properties and HBase conf 2023-07-17 22:15:45,351 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/java.io.tmpdir in system properties and HBase conf 2023-07-17 22:15:45,351 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 22:15:45,351 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-17 22:15:45,351 INFO [Listener at localhost/37695] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-17 22:15:45,356 WARN [Listener at localhost/37695] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 22:15:45,356 WARN [Listener at localhost/37695] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 22:15:45,388 DEBUG [Listener at localhost/37695-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101755a8bb7000a, quorum=127.0.0.1:57139, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-17 22:15:45,388 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101755a8bb7000a, quorum=127.0.0.1:57139, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-17 22:15:45,410 WARN [Listener at localhost/37695] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:45,412 INFO [Listener at localhost/37695] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:45,418 INFO [Listener at localhost/37695] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/java.io.tmpdir/Jetty_localhost_39083_hdfs____nz3k4v/webapp 2023-07-17 22:15:45,544 INFO [Listener at localhost/37695] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39083 2023-07-17 22:15:45,553 WARN [Listener at localhost/37695] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 22:15:45,554 WARN [Listener at localhost/37695] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 22:15:45,600 WARN [Listener at localhost/41705] common.MetricsLoggerTask(153): Metrics logging will not be 
async since the logger is not log4j 2023-07-17 22:15:45,615 WARN [Listener at localhost/41705] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 22:15:45,618 WARN [Listener at localhost/41705] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:45,619 INFO [Listener at localhost/41705] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:45,624 INFO [Listener at localhost/41705] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/java.io.tmpdir/Jetty_localhost_33933_datanode____8467hc/webapp 2023-07-17 22:15:45,726 INFO [Listener at localhost/41705] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33933 2023-07-17 22:15:45,733 WARN [Listener at localhost/32889] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 22:15:45,747 WARN [Listener at localhost/32889] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 22:15:45,748 WARN [Listener at localhost/32889] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:45,749 INFO [Listener at localhost/32889] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:45,753 INFO [Listener at localhost/32889] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/java.io.tmpdir/Jetty_localhost_46273_datanode____aagypf/webapp 2023-07-17 22:15:45,829 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe209e5a282ae21a7: Processing first storage report for DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d from datanode 81997728-5ea6-46a7-88ba-03eb1aa91543 2023-07-17 22:15:45,830 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe209e5a282ae21a7: from storage DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d node DatanodeRegistration(127.0.0.1:42283, datanodeUuid=81997728-5ea6-46a7-88ba-03eb1aa91543, infoPort=35629, infoSecurePort=0, ipcPort=32889, storageInfo=lv=-57;cid=testClusterID;nsid=94702594;c=1689632145360), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-17 22:15:45,830 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe209e5a282ae21a7: Processing first storage report for DS-6c493838-01d7-4c9d-a971-22ee1266f2b3 from datanode 81997728-5ea6-46a7-88ba-03eb1aa91543 2023-07-17 22:15:45,830 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe209e5a282ae21a7: from storage DS-6c493838-01d7-4c9d-a971-22ee1266f2b3 node DatanodeRegistration(127.0.0.1:42283, datanodeUuid=81997728-5ea6-46a7-88ba-03eb1aa91543, infoPort=35629, infoSecurePort=0, ipcPort=32889, storageInfo=lv=-57;cid=testClusterID;nsid=94702594;c=1689632145360), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:45,858 INFO [Listener 
at localhost/32889] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46273 2023-07-17 22:15:45,865 WARN [Listener at localhost/35293] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 22:15:45,881 WARN [Listener at localhost/35293] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 22:15:45,884 WARN [Listener at localhost/35293] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:45,886 INFO [Listener at localhost/35293] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:45,889 INFO [Listener at localhost/35293] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/java.io.tmpdir/Jetty_localhost_45965_datanode____.l6dmgy/webapp 2023-07-17 22:15:45,986 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbca2d8773beebc65: Processing first storage report for DS-932f9eb1-46bd-46be-a1e6-5521082e7678 from datanode 9eec64c9-37cc-4fe4-bac0-51330008f3c2 2023-07-17 22:15:45,986 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbca2d8773beebc65: from storage DS-932f9eb1-46bd-46be-a1e6-5521082e7678 node DatanodeRegistration(127.0.0.1:32853, datanodeUuid=9eec64c9-37cc-4fe4-bac0-51330008f3c2, infoPort=39019, infoSecurePort=0, ipcPort=35293, storageInfo=lv=-57;cid=testClusterID;nsid=94702594;c=1689632145360), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:45,986 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbca2d8773beebc65: Processing first storage report for DS-bbe171a2-af7c-406f-9704-597b2e4101f1 from datanode 9eec64c9-37cc-4fe4-bac0-51330008f3c2 2023-07-17 22:15:45,986 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbca2d8773beebc65: from storage DS-bbe171a2-af7c-406f-9704-597b2e4101f1 node DatanodeRegistration(127.0.0.1:32853, datanodeUuid=9eec64c9-37cc-4fe4-bac0-51330008f3c2, infoPort=39019, infoSecurePort=0, ipcPort=35293, storageInfo=lv=-57;cid=testClusterID;nsid=94702594;c=1689632145360), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:46,010 INFO [Listener at localhost/35293] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45965 2023-07-17 22:15:46,018 WARN [Listener at localhost/42151] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 22:15:46,047 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:46,047 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 22:15:46,047 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 22:15:46,127 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7ff21bfc62d0e29b: Processing first storage report for DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae from datanode 9ec323b8-b857-4dec-b84b-02d9a37a2db5 2023-07-17 22:15:46,127 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7ff21bfc62d0e29b: from storage DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae node DatanodeRegistration(127.0.0.1:42811, datanodeUuid=9ec323b8-b857-4dec-b84b-02d9a37a2db5, infoPort=40077, infoSecurePort=0, ipcPort=42151, storageInfo=lv=-57;cid=testClusterID;nsid=94702594;c=1689632145360), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:46,127 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7ff21bfc62d0e29b: Processing first storage report for DS-fb030e21-1edd-4d7a-a2eb-1c6c2023ceaf from datanode 9ec323b8-b857-4dec-b84b-02d9a37a2db5 2023-07-17 22:15:46,127 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7ff21bfc62d0e29b: from storage DS-fb030e21-1edd-4d7a-a2eb-1c6c2023ceaf node DatanodeRegistration(127.0.0.1:42811, datanodeUuid=9ec323b8-b857-4dec-b84b-02d9a37a2db5, infoPort=40077, infoSecurePort=0, ipcPort=42151, storageInfo=lv=-57;cid=testClusterID;nsid=94702594;c=1689632145360), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:46,129 DEBUG [Listener at localhost/42151] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739 2023-07-17 22:15:46,130 DEBUG [Listener at localhost/42151] zookeeper.MiniZooKeeperCluster(243): Failed binding ZK Server to client port: 52792 java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:461) at sun.nio.ch.Net.bind(Net.java:453) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:222) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:78) at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:687) at org.apache.zookeeper.server.ServerCnxnFactory.configure(ServerCnxnFactory.java:76) at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:239) at org.apache.hadoop.hbase.HBaseZKTestingUtility.startMiniZKCluster(HBaseZKTestingUtility.java:129) at org.apache.hadoop.hbase.HBaseZKTestingUtility.startMiniZKCluster(HBaseZKTestingUtility.java:102) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1090) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1048) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.toggleQuotaCheckAndRestartMiniCluster(TestRSGroupsAdmin1.java:492) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.testRSGroupListDoesNotContainFailedTableCreation(TestRSGroupsAdmin1.java:410) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-07-17 22:15:46,132 INFO [Listener at localhost/42151] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/zookeeper_0, clientPort=52793, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-17 22:15:46,134 INFO [Listener at localhost/42151] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52793 2023-07-17 22:15:46,134 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,135 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,157 INFO [Listener at localhost/42151] util.FSUtils(471): Created version file at hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500 with version=8 2023-07-17 22:15:46,158 INFO [Listener at localhost/42151] 
hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/hbase-staging 2023-07-17 22:15:46,159 DEBUG [Listener at localhost/42151] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-17 22:15:46,159 DEBUG [Listener at localhost/42151] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-17 22:15:46,159 DEBUG [Listener at localhost/42151] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-17 22:15:46,159 DEBUG [Listener at localhost/42151] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-17 22:15:46,160 INFO [Listener at localhost/42151] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:46,160 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,160 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,160 INFO [Listener at localhost/42151] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:46,160 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,160 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:46,161 INFO [Listener at localhost/42151] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:46,161 INFO [Listener at localhost/42151] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46299 2023-07-17 22:15:46,162 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,163 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,164 INFO [Listener at localhost/42151] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46299 connecting to ZooKeeper ensemble=127.0.0.1:52793 2023-07-17 22:15:46,171 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:462990x0, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:46,172 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46299-0x101755b05d70000 connected 2023-07-17 22:15:46,186 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/master 2023-07-17 22:15:46,186 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:46,187 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:46,187 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46299 2023-07-17 22:15:46,187 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46299 2023-07-17 22:15:46,187 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46299 2023-07-17 22:15:46,188 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46299 2023-07-17 22:15:46,188 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46299 2023-07-17 22:15:46,190 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:46,190 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:46,190 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:46,191 INFO [Listener at localhost/42151] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-17 22:15:46,191 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:46,191 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:46,191 INFO [Listener at localhost/42151] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 22:15:46,192 INFO [Listener at localhost/42151] http.HttpServer(1146): Jetty bound to port 33435 2023-07-17 22:15:46,192 INFO [Listener at localhost/42151] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:46,194 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,194 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@52285fac{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:46,195 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,195 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6dbadf33{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:46,206 INFO [Listener at localhost/42151] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:46,207 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:46,207 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:46,208 INFO [Listener at localhost/42151] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 22:15:46,209 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,210 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@342ff873{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 22:15:46,211 INFO [Listener at localhost/42151] server.AbstractConnector(333): Started ServerConnector@23515fca{HTTP/1.1, (http/1.1)}{0.0.0.0:33435} 2023-07-17 22:15:46,212 INFO [Listener at localhost/42151] server.Server(415): Started @36485ms 2023-07-17 22:15:46,212 INFO [Listener at localhost/42151] master.HMaster(444): hbase.rootdir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500, hbase.cluster.distributed=false 2023-07-17 22:15:46,228 INFO [Listener at localhost/42151] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:46,228 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,228 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,228 INFO [Listener at localhost/42151] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 
22:15:46,228 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,228 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:46,228 INFO [Listener at localhost/42151] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:46,229 INFO [Listener at localhost/42151] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46645 2023-07-17 22:15:46,229 INFO [Listener at localhost/42151] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:46,230 DEBUG [Listener at localhost/42151] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:46,230 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,231 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,232 INFO [Listener at localhost/42151] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46645 connecting to ZooKeeper ensemble=127.0.0.1:52793 2023-07-17 22:15:46,235 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:466450x0, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:46,236 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:466450x0, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:46,237 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46645-0x101755b05d70001 connected 2023-07-17 22:15:46,237 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:46,237 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:46,238 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46645 2023-07-17 22:15:46,238 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46645 2023-07-17 22:15:46,239 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46645 2023-07-17 22:15:46,239 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46645 2023-07-17 22:15:46,239 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46645 2023-07-17 22:15:46,242 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:46,242 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:46,242 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:46,242 INFO [Listener at localhost/42151] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:46,242 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:46,242 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:46,243 INFO [Listener at localhost/42151] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 22:15:46,244 INFO [Listener at localhost/42151] http.HttpServer(1146): Jetty bound to port 45541 2023-07-17 22:15:46,244 INFO [Listener at localhost/42151] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:46,247 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,247 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4028bb16{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:46,247 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,247 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5033e558{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:46,254 INFO [Listener at localhost/42151] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:46,255 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:46,255 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:46,255 INFO [Listener at localhost/42151] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 22:15:46,258 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,259 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@75d7c124{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:46,260 INFO [Listener at localhost/42151] server.AbstractConnector(333): Started ServerConnector@10a92bd3{HTTP/1.1, (http/1.1)}{0.0.0.0:45541} 2023-07-17 22:15:46,260 INFO [Listener at localhost/42151] server.Server(415): Started @36533ms 2023-07-17 22:15:46,271 INFO [Listener at localhost/42151] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:46,271 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,272 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,272 INFO [Listener at localhost/42151] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:46,272 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,272 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:46,272 INFO [Listener at localhost/42151] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:46,273 INFO [Listener at localhost/42151] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41139 2023-07-17 22:15:46,273 INFO [Listener at localhost/42151] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:46,274 DEBUG [Listener at localhost/42151] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:46,275 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,276 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,277 INFO [Listener at localhost/42151] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41139 connecting to ZooKeeper ensemble=127.0.0.1:52793 2023-07-17 22:15:46,281 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:411390x0, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:46,282 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41139-0x101755b05d70002 connected 2023-07-17 22:15:46,282 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): 
regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:46,283 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:46,283 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:46,287 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41139 2023-07-17 22:15:46,287 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41139 2023-07-17 22:15:46,290 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41139 2023-07-17 22:15:46,291 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41139 2023-07-17 22:15:46,292 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41139 2023-07-17 22:15:46,294 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:46,294 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:46,294 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:46,295 INFO [Listener at localhost/42151] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:46,295 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:46,295 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:46,295 INFO [Listener at localhost/42151] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 22:15:46,296 INFO [Listener at localhost/42151] http.HttpServer(1146): Jetty bound to port 34917 2023-07-17 22:15:46,296 INFO [Listener at localhost/42151] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:46,299 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,300 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@885bf81{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:46,300 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,300 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d1cc2f3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:46,306 INFO [Listener at localhost/42151] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:46,307 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:46,307 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:46,307 INFO [Listener at localhost/42151] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 22:15:46,313 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,314 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f9228ec{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:46,316 INFO [Listener at localhost/42151] server.AbstractConnector(333): Started ServerConnector@63475078{HTTP/1.1, (http/1.1)}{0.0.0.0:34917} 2023-07-17 22:15:46,316 INFO [Listener at localhost/42151] server.Server(415): Started @36589ms 2023-07-17 22:15:46,327 INFO [Listener at localhost/42151] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:46,327 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,327 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,327 INFO [Listener at localhost/42151] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:46,327 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-17 22:15:46,328 INFO [Listener at localhost/42151] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:46,328 INFO [Listener at localhost/42151] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:46,328 INFO [Listener at localhost/42151] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44063 2023-07-17 22:15:46,329 INFO [Listener at localhost/42151] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:46,330 DEBUG [Listener at localhost/42151] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:46,331 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,331 INFO [Listener at localhost/42151] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,332 INFO [Listener at localhost/42151] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44063 connecting to ZooKeeper ensemble=127.0.0.1:52793 2023-07-17 22:15:46,336 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:440630x0, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:46,338 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:440630x0, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:46,339 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44063-0x101755b05d70003 connected 2023-07-17 22:15:46,339 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:46,340 DEBUG [Listener at localhost/42151] zookeeper.ZKUtil(164): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:46,342 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44063 2023-07-17 22:15:46,342 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44063 2023-07-17 22:15:46,343 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44063 2023-07-17 22:15:46,344 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44063 2023-07-17 22:15:46,346 DEBUG [Listener at localhost/42151] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44063 2023-07-17 22:15:46,348 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:46,348 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:46,348 INFO [Listener at localhost/42151] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:46,349 INFO [Listener at localhost/42151] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:46,349 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:46,349 INFO [Listener at localhost/42151] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:46,349 INFO [Listener at localhost/42151] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 22:15:46,350 INFO [Listener at localhost/42151] http.HttpServer(1146): Jetty bound to port 39827 2023-07-17 22:15:46,350 INFO [Listener at localhost/42151] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:46,354 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,354 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72d7b0a9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:46,354 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,354 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b8bc652{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:46,359 INFO [Listener at localhost/42151] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:46,359 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:46,359 INFO [Listener at localhost/42151] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:46,360 INFO [Listener at localhost/42151] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 22:15:46,360 INFO [Listener at localhost/42151] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:46,361 INFO [Listener at localhost/42151] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4e7f4242{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:46,362 INFO [Listener at localhost/42151] server.AbstractConnector(333): Started ServerConnector@2c408b4b{HTTP/1.1, (http/1.1)}{0.0.0.0:39827} 2023-07-17 22:15:46,362 INFO [Listener at localhost/42151] server.Server(415): Started @36635ms 2023-07-17 22:15:46,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:46,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@21d0e896{HTTP/1.1, (http/1.1)}{0.0.0.0:45985} 2023-07-17 22:15:46,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36642ms 2023-07-17 22:15:46,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,370 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 22:15:46,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,372 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:46,372 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:46,372 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:46,373 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:46,373 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,374 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 22:15:46,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46299,1689632146159 from backup master directory 2023-07-17 
22:15:46,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 22:15:46,377 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,377 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:46,377 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 22:15:46,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/hbase.id with ID: 56e44d19-98cd-47f5-a0e8-cc7ffaceab2e 2023-07-17 22:15:46,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:46,408 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,420 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x06470caf to 127.0.0.1:52793 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:46,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47ed5a1f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:46,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:46,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-17 22:15:46,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:46,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store-tmp 2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 22:15:46,436 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:46,436 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
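For readers following the descriptor string above: a minimal sketch of how a column family with the same attributes as 'proc' could be declared through the HBase 2.x client API. The class name ProcFamilySketch is made up for illustration; the actual 'master:store' local region is built internally by MasterRegion, not through this client path.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class ProcFamilySketch {
  public static void main(String[] args) {
    // Illustrative only: mirrors the attribute string the log prints for the
    // 'proc' family of 'master:store' (BLOOMFILTER => 'ROW', VERSIONS => '1', etc.).
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)
        .setInMemory(false)
        .setMaxVersions(1)
        .setKeepDeletedCells(KeepDeletedCells.FALSE)
        .setDataBlockEncoding(DataBlockEncoding.NONE)
        .setCompressionType(Compression.Algorithm.NONE)
        .setTimeToLive(HConstants.FOREVER)   // TTL => 'FOREVER'
        .setMinVersions(0)
        .setBlockCacheEnabled(true)
        .setBlocksize(65536)
        .setScope(0)                         // REPLICATION_SCOPE => '0'
        .build();
    System.out.println(proc);
  }
}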
2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:46,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/WALs/jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46299%2C1689632146159, suffix=, logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/WALs/jenkins-hbase4.apache.org,46299,1689632146159, archiveDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/oldWALs, maxLogs=10 2023-07-17 22:15:46,454 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK] 2023-07-17 22:15:46,454 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK] 2023-07-17 22:15:46,454 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK] 2023-07-17 22:15:46,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/WALs/jenkins-hbase4.apache.org,46299,1689632146159/jenkins-hbase4.apache.org%2C46299%2C1689632146159.1689632146439 2023-07-17 22:15:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK], DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK], DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK]] 2023-07-17 22:15:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:46,460 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:46,461 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-17 22:15:46,462 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-17 22:15:46,462 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:46,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:46,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:46,466 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:46,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:46,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11204383840, jitterRate=0.04348956048488617}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:46,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:46,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-17 22:15:46,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-17 22:15:46,473 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-17 22:15:46,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-17 22:15:46,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-17 22:15:46,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-17 22:15:46,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-17 22:15:46,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-17 22:15:46,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-17 22:15:46,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-17 22:15:46,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-17 22:15:46,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-17 22:15:46,482 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-17 22:15:46,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-17 22:15:46,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-17 22:15:46,485 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:46,485 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:46,485 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-17 22:15:46,485 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:46,486 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46299,1689632146159, sessionid=0x101755b05d70000, setting cluster-up flag (Was=false) 2023-07-17 22:15:46,492 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-17 22:15:46,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,502 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-17 22:15:46,508 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:46,509 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.hbase-snapshot/.tmp 2023-07-17 22:15:46,510 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-17 22:15:46,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-17 22:15:46,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-17 22:15:46,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-17 22:15:46,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
2023-07-17 22:15:46,513 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:46,514 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:46,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 22:15:46,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 22:15:46,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 22:15:46,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
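A minimal sketch of the configuration keys that, to the best of my recollection, back the StochasticLoadBalancer values logged above (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000). Treat the key names as assumptions to verify against the HBase version in use; the test run itself simply uses the defaults.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static void main(String[] args) {
    // Set the (assumed) stochastic-balancer keys to the same values the log reports.
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
    System.out.println("maxSteps="
        + conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
  }
}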
2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:46,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689632176528 2023-07-17 22:15:46,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-17 22:15:46,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-17 22:15:46,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-17 22:15:46,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-17 22:15:46,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-17 22:15:46,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-17 22:15:46,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:46,532 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:46,532 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-17 22:15:46,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-17 22:15:46,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-17 22:15:46,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-17 22:15:46,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-17 22:15:46,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-17 22:15:46,534 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:46,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632146534,5,FailOnTimeoutGroup] 2023-07-17 22:15:46,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632146534,5,FailOnTimeoutGroup] 2023-07-17 22:15:46,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-17 22:15:46,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
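A minimal sketch, under the assumption that one wanted a user table shaped like the hbase:meta descriptor the PEWorker writes above (three in-memory families plus the MultiRowMutationEndpoint coprocessor). The table name 'meta_like' is hypothetical; hbase:meta itself is created internally by InitMetaProcedure, not through this API.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptorSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("meta_like"))   // hypothetical table name
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE).setInMemory(true)
            .setMaxVersions(3).setBlocksize(8192).build())
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("rep_barrier"))
            .setBloomFilterType(BloomType.NONE).setInMemory(true)
            .setMaxVersions(Integer.MAX_VALUE).build())   // VERSIONS => '2147483647'
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("table"))
            .setBloomFilterType(BloomType.NONE).setInMemory(true)
            .setMaxVersions(3).setBlocksize(8192).build())
        .build();
    System.out.println(td);
  }
}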
2023-07-17 22:15:46,565 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:46,565 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:46,565 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500 2023-07-17 22:15:46,568 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(951): ClusterId : 56e44d19-98cd-47f5-a0e8-cc7ffaceab2e 2023-07-17 22:15:46,575 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:46,577 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(951): ClusterId : 56e44d19-98cd-47f5-a0e8-cc7ffaceab2e 2023-07-17 22:15:46,578 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:46,579 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(951): ClusterId : 56e44d19-98cd-47f5-a0e8-cc7ffaceab2e 2023-07-17 22:15:46,579 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:46,582 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:46,582 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:46,582 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:46,582 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:46,583 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:46,583 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:46,586 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:46,590 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:46,590 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:46,590 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:46,592 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ReadOnlyZKClient(139): Connect 0x0018725e to 127.0.0.1:52793 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:46,592 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ReadOnlyZKClient(139): Connect 0x67256f51 to 127.0.0.1:52793 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:46,592 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:46,596 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ReadOnlyZKClient(139): Connect 0x010e42c0 to 127.0.0.1:52793 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:46,600 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/info 2023-07-17 22:15:46,603 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:46,604 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:46,604 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:46,605 DEBUG [RS:1;jenkins-hbase4:41139] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69e28dc5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:46,605 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:46,605 DEBUG [RS:0;jenkins-hbase4:46645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@edf9c2d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:46,606 DEBUG [RS:2;jenkins-hbase4:44063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bf629b0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:46,606 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:46,606 DEBUG [RS:0;jenkins-hbase4:46645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4fe29616, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:46,606 DEBUG [RS:1;jenkins-hbase4:41139] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28df87fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:46,606 DEBUG [RS:2;jenkins-hbase4:44063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ef4a6aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:46,607 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:46,607 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:46,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/table 2023-07-17 22:15:46,609 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:46,610 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:46,611 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740 2023-07-17 22:15:46,611 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740 2023-07-17 22:15:46,614 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-17 22:15:46,616 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:46,619 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44063 2023-07-17 22:15:46,619 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41139 2023-07-17 22:15:46,619 INFO [RS:2;jenkins-hbase4:44063] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:46,619 INFO [RS:2;jenkins-hbase4:44063] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:46,619 INFO [RS:1;jenkins-hbase4:41139] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:46,619 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:46,619 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 22:15:46,619 INFO [RS:1;jenkins-hbase4:41139] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:46,619 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46645 2023-07-17 22:15:46,619 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-17 22:15:46,619 INFO [RS:0;jenkins-hbase4:46645] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:46,619 INFO [RS:0;jenkins-hbase4:46645] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:46,619 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 22:15:46,620 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10526481600, jitterRate=-0.019645005464553833}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:46,620 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:46,620 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46299,1689632146159 with isa=jenkins-hbase4.apache.org/172.31.14.131:44063, startcode=1689632146327 2023-07-17 22:15:46,620 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:46,620 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:46,620 DEBUG [RS:2;jenkins-hbase4:44063] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:46,620 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:46,620 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46299,1689632146159 with isa=jenkins-hbase4.apache.org/172.31.14.131:46645, startcode=1689632146227 2023-07-17 22:15:46,620 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46299,1689632146159 with isa=jenkins-hbase4.apache.org/172.31.14.131:41139, startcode=1689632146271 2023-07-17 22:15:46,620 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:46,620 DEBUG [RS:1;jenkins-hbase4:41139] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:46,620 DEBUG [RS:0;jenkins-hbase4:46645] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:46,620 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:46,621 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:46,621 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:46,622 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39141, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:46,622 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38961, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:46,623 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45337, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), 
service=RegionServerStatusService 2023-07-17 22:15:46,624 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:46,624 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-17 22:15:46,623 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46299] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-17 22:15:46,624 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:46,624 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-17 22:15:46,624 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46299] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,625 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:46,625 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500 2023-07-17 22:15:46,625 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-17 22:15:46,625 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46299] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,625 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41705 2023-07-17 22:15:46,625 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500 2023-07-17 22:15:46,625 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 22:15:46,625 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500 2023-07-17 22:15:46,625 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-17 22:15:46,625 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41705 2023-07-17 22:15:46,625 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33435 2023-07-17 22:15:46,625 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33435 2023-07-17 22:15:46,625 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41705 2023-07-17 22:15:46,627 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-17 22:15:46,627 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33435 2023-07-17 22:15:46,628 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-17 22:15:46,633 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:46,634 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ZKUtil(162): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,634 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ZKUtil(162): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,634 WARN [RS:0;jenkins-hbase4:46645] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
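The repeated ZKUtil lines ("Set watcher on existing znode", "Set watcher on znode that does not yet exist") both boil down to an exists-with-watch call, which registers the watch whether or not the node is present. A minimal sketch with the plain ZooKeeper client rather than HBase's ZKWatcher, reusing the quorum address from the log and a hypothetical znode path.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the quorum the log shows; the session timeout matches the logged 90000ms.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:52793", 90_000,
        (WatchedEvent e) -> System.out.println("event " + e.getType() + " on " + e.getPath()));
    String path = "/hbase/rs/example-server";          // hypothetical znode path
    boolean present = zk.exists(path, true) != null;   // true => register the default watcher
    System.out.println(path + (present ? " exists" : " does not exist yet")
        + "; watch is set either way");
    zk.close();
  }
}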
2023-07-17 22:15:46,634 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44063,1689632146327] 2023-07-17 22:15:46,634 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ZKUtil(162): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,634 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41139,1689632146271] 2023-07-17 22:15:46,634 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46645,1689632146227] 2023-07-17 22:15:46,634 INFO [RS:0;jenkins-hbase4:46645] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:46,634 WARN [RS:2;jenkins-hbase4:44063] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:46,634 WARN [RS:1;jenkins-hbase4:41139] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:46,635 INFO [RS:2;jenkins-hbase4:44063] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:46,635 INFO [RS:1;jenkins-hbase4:41139] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:46,634 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,635 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,635 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,641 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ZKUtil(162): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,641 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ZKUtil(162): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,641 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ZKUtil(162): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,641 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ZKUtil(162): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,641 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ZKUtil(162): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,641 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ZKUtil(162): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,642 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ZKUtil(162): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,642 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ZKUtil(162): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,642 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ZKUtil(162): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,643 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:46,643 DEBUG [RS:0;jenkins-hbase4:46645] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:46,643 INFO [RS:2;jenkins-hbase4:44063] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:46,643 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:46,643 INFO [RS:0;jenkins-hbase4:46645] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:46,644 INFO [RS:2;jenkins-hbase4:44063] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:46,645 INFO [RS:2;jenkins-hbase4:44063] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:46,645 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,646 INFO [RS:1;jenkins-hbase4:41139] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:46,646 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:46,649 INFO [RS:1;jenkins-hbase4:41139] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:46,649 INFO [RS:0;jenkins-hbase4:46645] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:46,650 INFO [RS:1;jenkins-hbase4:41139] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:46,650 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
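On the MemStoreFlusher line above: 743.3 M / 782.4 M is approximately 0.95, consistent with the default lower-limit fraction. A minimal sketch of that arithmetic, assuming the default hbase.regionserver.global.memstore.size of 0.4 and lower.limit of 0.95, which would imply a heap of roughly 1.9 GB for these test JVMs; both fractions are assumptions, not values printed by this log.

public class MemStoreLimitSketch {
  public static void main(String[] args) {
    double heapMb = 1956.0;                // implied heap size if the 0.4 default applies (assumption)
    double globalLimit = heapMb * 0.4;     // ~782.4 MB, matching globalMemStoreLimit in the log
    double lowMark = globalLimit * 0.95;   // ~743.3 MB, matching globalMemStoreLimitLowMark
    System.out.printf("limit=%.1f M, lowMark=%.1f M%n", globalLimit, lowMark);
  }
}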
2023-07-17 22:15:46,650 INFO [RS:0;jenkins-hbase4:46645] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:46,650 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:46,650 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,651 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:46,652 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:2;jenkins-hbase4:44063] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:46,652 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,652 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:46,653 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,653 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,653 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,653 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:46,653 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,654 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,654 DEBUG [RS:1;jenkins-hbase4:41139] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,654 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,654 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,654 DEBUG [RS:0;jenkins-hbase4:46645] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:46,655 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,655 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,655 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,655 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,655 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:46,655 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,655 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,656 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,666 INFO [RS:1;jenkins-hbase4:41139] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:46,666 INFO [RS:2;jenkins-hbase4:44063] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:46,666 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41139,1689632146271-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,666 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44063,1689632146327-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,667 INFO [RS:0;jenkins-hbase4:46645] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:46,667 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46645,1689632146227-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,684 INFO [RS:0;jenkins-hbase4:46645] regionserver.Replication(203): jenkins-hbase4.apache.org,46645,1689632146227 started 2023-07-17 22:15:46,684 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46645,1689632146227, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46645, sessionid=0x101755b05d70001 2023-07-17 22:15:46,684 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:46,684 DEBUG [RS:0;jenkins-hbase4:46645] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,684 DEBUG [RS:0;jenkins-hbase4:46645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46645,1689632146227' 2023-07-17 22:15:46,684 DEBUG [RS:0;jenkins-hbase4:46645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:46,685 DEBUG [RS:0;jenkins-hbase4:46645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:46,685 INFO [RS:1;jenkins-hbase4:41139] regionserver.Replication(203): jenkins-hbase4.apache.org,41139,1689632146271 started 2023-07-17 22:15:46,685 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41139,1689632146271, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41139, sessionid=0x101755b05d70002 2023-07-17 22:15:46,685 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:46,685 DEBUG [RS:1;jenkins-hbase4:41139] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,685 DEBUG [RS:0;jenkins-hbase4:46645] 
procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:46,685 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:46,685 DEBUG [RS:0;jenkins-hbase4:46645] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:46,685 INFO [RS:2;jenkins-hbase4:44063] regionserver.Replication(203): jenkins-hbase4.apache.org,44063,1689632146327 started 2023-07-17 22:15:46,685 DEBUG [RS:1;jenkins-hbase4:41139] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41139,1689632146271' 2023-07-17 22:15:46,685 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44063,1689632146327, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44063, sessionid=0x101755b05d70003 2023-07-17 22:15:46,685 DEBUG [RS:0;jenkins-hbase4:46645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46645,1689632146227' 2023-07-17 22:15:46,685 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:46,685 DEBUG [RS:0;jenkins-hbase4:46645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:46,685 DEBUG [RS:2;jenkins-hbase4:44063] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,685 DEBUG [RS:2;jenkins-hbase4:44063] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44063,1689632146327' 2023-07-17 22:15:46,685 DEBUG [RS:2;jenkins-hbase4:44063] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:46,685 DEBUG [RS:1;jenkins-hbase4:41139] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:46,686 DEBUG [RS:1;jenkins-hbase4:41139] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:46,686 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:46,686 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:46,686 DEBUG [RS:1;jenkins-hbase4:41139] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,686 DEBUG [RS:1;jenkins-hbase4:41139] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41139,1689632146271' 2023-07-17 22:15:46,686 DEBUG [RS:1;jenkins-hbase4:41139] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:46,687 DEBUG [RS:1;jenkins-hbase4:41139] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:46,687 DEBUG [RS:1;jenkins-hbase4:41139] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:46,687 INFO [RS:1;jenkins-hbase4:41139] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-17 22:15:46,690 
INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,690 DEBUG [RS:0;jenkins-hbase4:46645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:46,690 DEBUG [RS:2;jenkins-hbase4:44063] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:46,690 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ZKUtil(398): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-17 22:15:46,690 INFO [RS:1;jenkins-hbase4:41139] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-17 22:15:46,690 DEBUG [RS:0;jenkins-hbase4:46645] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:46,690 INFO [RS:0;jenkins-hbase4:46645] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-17 22:15:46,690 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:46,691 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:46,690 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,691 DEBUG [RS:2;jenkins-hbase4:44063] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:46,691 DEBUG [RS:2;jenkins-hbase4:44063] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44063,1689632146327' 2023-07-17 22:15:46,691 DEBUG [RS:2;jenkins-hbase4:44063] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:46,691 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,691 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ZKUtil(398): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-17 22:15:46,691 INFO [RS:0;jenkins-hbase4:46645] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-17 22:15:46,691 DEBUG [RS:2;jenkins-hbase4:44063] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:46,691 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,691 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,691 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:46,691 DEBUG [RS:2;jenkins-hbase4:44063] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:46,691 INFO [RS:2;jenkins-hbase4:44063] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-17 22:15:46,691 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,692 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ZKUtil(398): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-17 22:15:46,692 INFO [RS:2;jenkins-hbase4:44063] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-17 22:15:46,692 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,692 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:46,778 DEBUG [jenkins-hbase4:46299] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-17 22:15:46,779 DEBUG [jenkins-hbase4:46299] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:46,779 DEBUG [jenkins-hbase4:46299] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:46,779 DEBUG [jenkins-hbase4:46299] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:46,779 DEBUG [jenkins-hbase4:46299] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:46,779 DEBUG [jenkins-hbase4:46299] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:46,780 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41139,1689632146271, state=OPENING 2023-07-17 22:15:46,781 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-17 22:15:46,783 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:46,785 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41139,1689632146271}] 2023-07-17 22:15:46,785 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:46,795 INFO [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41139%2C1689632146271, suffix=, logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,41139,1689632146271, archiveDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs, maxLogs=32 2023-07-17 22:15:46,795 INFO [RS:2;jenkins-hbase4:44063] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, 
rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44063%2C1689632146327, suffix=, logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,44063,1689632146327, archiveDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs, maxLogs=32 2023-07-17 22:15:46,795 INFO [RS:0;jenkins-hbase4:46645] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46645%2C1689632146227, suffix=, logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,46645,1689632146227, archiveDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs, maxLogs=32 2023-07-17 22:15:46,815 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK] 2023-07-17 22:15:46,815 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK] 2023-07-17 22:15:46,824 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK] 2023-07-17 22:15:46,825 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK] 2023-07-17 22:15:46,825 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK] 2023-07-17 22:15:46,825 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK] 2023-07-17 22:15:46,827 WARN [ReadOnlyZKClient-127.0.0.1:52793@0x06470caf] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-17 22:15:46,828 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:46,833 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK] 2023-07-17 22:15:46,833 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK] 2023-07-17 22:15:46,834 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK] 2023-07-17 22:15:46,836 INFO [RS:0;jenkins-hbase4:46645] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,46645,1689632146227/jenkins-hbase4.apache.org%2C46645%2C1689632146227.1689632146799 2023-07-17 22:15:46,836 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:46,837 DEBUG [RS:0;jenkins-hbase4:46645] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK], DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK], DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK]] 2023-07-17 22:15:46,837 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41139] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:56984 deadline: 1689632206836, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,839 INFO [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,41139,1689632146271/jenkins-hbase4.apache.org%2C41139%2C1689632146271.1689632146798 2023-07-17 22:15:46,839 INFO [RS:2;jenkins-hbase4:44063] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,44063,1689632146327/jenkins-hbase4.apache.org%2C44063%2C1689632146327.1689632146799 2023-07-17 22:15:46,840 DEBUG [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK], DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK], DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK]] 2023-07-17 22:15:46,840 DEBUG [RS:2;jenkins-hbase4:44063] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK], DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK], DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK]] 2023-07-17 22:15:46,939 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:46,942 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:46,943 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56990, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:47,004 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
hbase:meta,,1.1588230740 2023-07-17 22:15:47,004 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:47,006 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41139%2C1689632146271.meta, suffix=.meta, logDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,41139,1689632146271, archiveDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs, maxLogs=32 2023-07-17 22:15:47,021 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK] 2023-07-17 22:15:47,021 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK] 2023-07-17 22:15:47,021 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK] 2023-07-17 22:15:47,024 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/WALs/jenkins-hbase4.apache.org,41139,1689632146271/jenkins-hbase4.apache.org%2C41139%2C1689632146271.meta.1689632147007.meta 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32853,DS-932f9eb1-46bd-46be-a1e6-5521082e7678,DISK], DatanodeInfoWithStorage[127.0.0.1:42811,DS-ad51ec4a-a1ac-4487-af9e-8a0056e48aae,DISK], DatanodeInfoWithStorage[127.0.0.1:42283,DS-629e67b8-6430-49a8-b40e-339bcc2ffd0d,DISK]] 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 22:15:47,025 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 22:15:47,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 22:15:47,028 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:47,030 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/info 2023-07-17 22:15:47,030 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/info 2023-07-17 22:15:47,030 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:47,031 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,031 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:47,032 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:47,032 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:47,032 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:47,033 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,033 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:47,034 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/table 2023-07-17 22:15:47,034 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/table 2023-07-17 22:15:47,035 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:47,035 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,036 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740 2023-07-17 22:15:47,038 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740 2023-07-17 22:15:47,040 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-17 22:15:47,042 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:47,043 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9960307520, jitterRate=-0.07237407565116882}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:47,043 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:47,047 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689632146939 2023-07-17 22:15:47,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 22:15:47,052 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 22:15:47,052 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41139,1689632146271, state=OPEN 2023-07-17 22:15:47,053 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 22:15:47,054 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:47,055 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-17 22:15:47,055 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41139,1689632146271 in 271 msec 2023-07-17 22:15:47,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-17 22:15:47,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 431 msec 2023-07-17 22:15:47,058 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 544 msec 2023-07-17 22:15:47,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689632147058, completionTime=-1 2023-07-17 22:15:47,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-17 22:15:47,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-17 22:15:47,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-17 22:15:47,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689632207062 2023-07-17 22:15:47,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689632267062 2023-07-17 22:15:47,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46299,1689632146159-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46299,1689632146159-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46299,1689632146159-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46299, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-17 22:15:47,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:47,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-17 22:15:47,076 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-17 22:15:47,077 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:47,077 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:47,079 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,079 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb empty. 2023-07-17 22:15:47,080 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,080 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-17 22:15:47,097 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:47,098 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 11c0756bf0fbd526d9ce6d310df40bcb, NAME => 'hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp 2023-07-17 22:15:47,110 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,110 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 11c0756bf0fbd526d9ce6d310df40bcb, disabling compactions & flushes 2023-07-17 22:15:47,110 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 
2023-07-17 22:15:47,110 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:47,110 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. after waiting 0 ms 2023-07-17 22:15:47,110 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:47,110 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:47,110 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 11c0756bf0fbd526d9ce6d310df40bcb: 2023-07-17 22:15:47,112 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:47,114 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632147114"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632147114"}]},"ts":"1689632147114"} 2023-07-17 22:15:47,117 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:47,117 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:47,118 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147118"}]},"ts":"1689632147118"} 2023-07-17 22:15:47,124 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-17 22:15:47,127 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:47,127 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:47,127 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:47,127 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:47,127 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:47,127 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=11c0756bf0fbd526d9ce6d310df40bcb, ASSIGN}] 2023-07-17 22:15:47,129 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=11c0756bf0fbd526d9ce6d310df40bcb, ASSIGN 2023-07-17 22:15:47,131 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=11c0756bf0fbd526d9ce6d310df40bcb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41139,1689632146271; forceNewPlan=false, retain=false 2023-07-17 22:15:47,140 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:47,142 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 22:15:47,143 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:47,144 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:47,145 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641 empty. 
2023-07-17 22:15:47,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,146 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-17 22:15:47,160 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:47,162 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => edea253b94d278b5e20395d0aaaa9641, NAME => 'hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp 2023-07-17 22:15:47,192 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,193 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing edea253b94d278b5e20395d0aaaa9641, disabling compactions & flushes 2023-07-17 22:15:47,193 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:47,193 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:47,193 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. after waiting 0 ms 2023-07-17 22:15:47,193 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:47,193 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 
2023-07-17 22:15:47,193 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for edea253b94d278b5e20395d0aaaa9641: 2023-07-17 22:15:47,195 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:47,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632147196"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632147196"}]},"ts":"1689632147196"} 2023-07-17 22:15:47,197 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:47,198 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:47,198 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147198"}]},"ts":"1689632147198"} 2023-07-17 22:15:47,199 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-17 22:15:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:47,204 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=edea253b94d278b5e20395d0aaaa9641, ASSIGN}] 2023-07-17 22:15:47,205 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=edea253b94d278b5e20395d0aaaa9641, ASSIGN 2023-07-17 22:15:47,206 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=edea253b94d278b5e20395d0aaaa9641, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44063,1689632146327; forceNewPlan=false, retain=false 2023-07-17 22:15:47,206 INFO [jenkins-hbase4:46299] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-17 22:15:47,208 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=11c0756bf0fbd526d9ce6d310df40bcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:47,208 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632147208"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632147208"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632147208"}]},"ts":"1689632147208"} 2023-07-17 22:15:47,208 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=edea253b94d278b5e20395d0aaaa9641, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:47,208 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632147208"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632147208"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632147208"}]},"ts":"1689632147208"} 2023-07-17 22:15:47,210 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 11c0756bf0fbd526d9ce6d310df40bcb, server=jenkins-hbase4.apache.org,41139,1689632146271}] 2023-07-17 22:15:47,210 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure edea253b94d278b5e20395d0aaaa9641, server=jenkins-hbase4.apache.org,44063,1689632146327}] 2023-07-17 22:15:47,363 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:47,364 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:47,365 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34358, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:47,366 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 
2023-07-17 22:15:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 11c0756bf0fbd526d9ce6d310df40bcb, NAME => 'hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,368 INFO [StoreOpener-11c0756bf0fbd526d9ce6d310df40bcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,369 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:47,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => edea253b94d278b5e20395d0aaaa9641, NAME => 'hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:47,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:47,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. service=MultiRowMutationService 2023-07-17 22:15:47,369 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-17 22:15:47,369 DEBUG [StoreOpener-11c0756bf0fbd526d9ce6d310df40bcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/info 2023-07-17 22:15:47,369 DEBUG [StoreOpener-11c0756bf0fbd526d9ce6d310df40bcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/info 2023-07-17 22:15:47,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,370 INFO [StoreOpener-11c0756bf0fbd526d9ce6d310df40bcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 11c0756bf0fbd526d9ce6d310df40bcb columnFamilyName info 2023-07-17 22:15:47,370 INFO [StoreOpener-11c0756bf0fbd526d9ce6d310df40bcb-1] regionserver.HStore(310): Store=11c0756bf0fbd526d9ce6d310df40bcb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,371 INFO [StoreOpener-edea253b94d278b5e20395d0aaaa9641-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,371 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,371 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,372 DEBUG [StoreOpener-edea253b94d278b5e20395d0aaaa9641-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/m 2023-07-17 22:15:47,372 DEBUG [StoreOpener-edea253b94d278b5e20395d0aaaa9641-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/m 2023-07-17 22:15:47,372 INFO [StoreOpener-edea253b94d278b5e20395d0aaaa9641-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region edea253b94d278b5e20395d0aaaa9641 columnFamilyName m 2023-07-17 22:15:47,373 INFO [StoreOpener-edea253b94d278b5e20395d0aaaa9641-1] regionserver.HStore(310): Store=edea253b94d278b5e20395d0aaaa9641/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:47,377 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:47,377 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:47,377 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 11c0756bf0fbd526d9ce6d310df40bcb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10613411680, jitterRate=-0.011549010872840881}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:47,377 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 11c0756bf0fbd526d9ce6d310df40bcb: 2023-07-17 22:15:47,378 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb., pid=8, masterSystemTime=1689632147361 2023-07-17 22:15:47,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:47,382 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened edea253b94d278b5e20395d0aaaa9641; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2f01d625, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for edea253b94d278b5e20395d0aaaa9641: 2023-07-17 22:15:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:47,382 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:47,383 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=11c0756bf0fbd526d9ce6d310df40bcb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:47,383 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641., pid=9, masterSystemTime=1689632147363 2023-07-17 22:15:47,383 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632147383"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632147383"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632147383"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632147383"}]},"ts":"1689632147383"} 2023-07-17 22:15:47,386 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:47,387 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 
2023-07-17 22:15:47,387 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=edea253b94d278b5e20395d0aaaa9641, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:47,387 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632147387"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632147387"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632147387"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632147387"}]},"ts":"1689632147387"} 2023-07-17 22:15:47,389 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-17 22:15:47,389 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 11c0756bf0fbd526d9ce6d310df40bcb, server=jenkins-hbase4.apache.org,41139,1689632146271 in 178 msec 2023-07-17 22:15:47,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-17 22:15:47,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=11c0756bf0fbd526d9ce6d310df40bcb, ASSIGN in 262 msec 2023-07-17 22:15:47,391 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:47,391 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147391"}]},"ts":"1689632147391"} 2023-07-17 22:15:47,392 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-17 22:15:47,392 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-17 22:15:47,393 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure edea253b94d278b5e20395d0aaaa9641, server=jenkins-hbase4.apache.org,44063,1689632146327 in 181 msec 2023-07-17 22:15:47,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-17 22:15:47,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=edea253b94d278b5e20395d0aaaa9641, ASSIGN in 189 msec 2023-07-17 22:15:47,395 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:47,395 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147395"}]},"ts":"1689632147395"} 2023-07-17 22:15:47,395 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:47,396 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-17 22:15:47,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 321 msec 2023-07-17 22:15:47,398 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:47,399 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 258 msec 2023-07-17 22:15:47,446 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:47,448 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34366, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:47,451 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-17 22:15:47,451 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-17 22:15:47,462 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:47,462 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:47,464 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:47,465 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46299,1689632146159] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-17 22:15:47,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-17 22:15:47,477 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:47,477 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:47,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-17 22:15:47,491 DEBUG [Listener at 
localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:47,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-17 22:15:47,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 22:15:47,518 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:47,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-07-17 22:15:47,530 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-17 22:15:47,537 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-17 22:15:47,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.160sec 2023-07-17 22:15:47,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-17 22:15:47,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:47,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-17 22:15:47,538 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-17 22:15:47,540 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:47,541 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:47,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-17 22:15:47,542 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,543 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e empty. 2023-07-17 22:15:47,543 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,543 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-17 22:15:47,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-17 22:15:47,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-17 22:15:47,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:47,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-17 22:15:47,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-17 22:15:47,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46299,1689632146159-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-17 22:15:47,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46299,1689632146159-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-17 22:15:47,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-17 22:15:47,563 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:47,565 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8dc45a7a1cbcdeef535b03350181395e, NAME => 'hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp 2023-07-17 22:15:47,579 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,580 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 8dc45a7a1cbcdeef535b03350181395e, disabling compactions & flushes 2023-07-17 22:15:47,580 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:47,580 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:47,580 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. after waiting 0 ms 2023-07-17 22:15:47,580 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:47,580 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:47,580 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 8dc45a7a1cbcdeef535b03350181395e: 2023-07-17 22:15:47,582 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:47,583 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689632147583"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632147583"}]},"ts":"1689632147583"} 2023-07-17 22:15:47,584 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 22:15:47,585 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:47,585 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147585"}]},"ts":"1689632147585"} 2023-07-17 22:15:47,586 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-17 22:15:47,590 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:47,590 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:47,590 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:47,590 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:47,590 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:47,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=8dc45a7a1cbcdeef535b03350181395e, ASSIGN}] 2023-07-17 22:15:47,591 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=8dc45a7a1cbcdeef535b03350181395e, ASSIGN 2023-07-17 22:15:47,592 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=8dc45a7a1cbcdeef535b03350181395e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41139,1689632146271; forceNewPlan=false, retain=false 2023-07-17 22:15:47,597 DEBUG [Listener at localhost/42151] zookeeper.ReadOnlyZKClient(139): Connect 0x2eace996 to 127.0.0.1:52793 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:47,603 DEBUG [Listener at localhost/42151] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65b4de14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:47,605 DEBUG [hconnection-0x7f820e5e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:47,608 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57002, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:47,610 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:47,610 INFO [Listener at localhost/42151] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:47,613 DEBUG [Listener at localhost/42151] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-17 22:15:47,614 INFO [RS-EventLoopGroup-8-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54726, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-17 22:15:47,617 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-17 22:15:47,617 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:47,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-17 22:15:47,618 DEBUG [Listener at localhost/42151] zookeeper.ReadOnlyZKClient(139): Connect 0x41f5b42e to 127.0.0.1:52793 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:47,623 DEBUG [Listener at localhost/42151] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e6aa2b0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:47,624 INFO [Listener at localhost/42151] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52793 2023-07-17 22:15:47,627 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:47,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101755b05d7000a connected 2023-07-17 22:15:47,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-17 22:15:47,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-17 22:15:47,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-17 22:15:47,646 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:47,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 17 msec 2023-07-17 22:15:47,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-17 22:15:47,742 INFO [jenkins-hbase4:46299] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 22:15:47,743 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=8dc45a7a1cbcdeef535b03350181395e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:47,743 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689632147743"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632147743"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632147743"}]},"ts":"1689632147743"} 2023-07-17 22:15:47,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:47,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-17 22:15:47,745 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 8dc45a7a1cbcdeef535b03350181395e, server=jenkins-hbase4.apache.org,41139,1689632146271}] 2023-07-17 22:15:47,746 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:47,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-17 22:15:47,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 22:15:47,749 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:47,749 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:47,752 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:47,753 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:47,754 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 empty. 
2023-07-17 22:15:47,754 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:47,754 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-17 22:15:47,767 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:47,768 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3688a5e204fe1df3766abdef598cdac1, NAME => 'np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp 2023-07-17 22:15:47,776 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,776 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 3688a5e204fe1df3766abdef598cdac1, disabling compactions & flushes 2023-07-17 22:15:47,777 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:47,777 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:47,777 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. after waiting 0 ms 2023-07-17 22:15:47,777 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:47,777 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:47,777 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 3688a5e204fe1df3766abdef598cdac1: 2023-07-17 22:15:47,779 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:47,780 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632147779"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632147779"}]},"ts":"1689632147779"} 2023-07-17 22:15:47,781 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 22:15:47,782 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:47,782 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147782"}]},"ts":"1689632147782"} 2023-07-17 22:15:47,783 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-17 22:15:47,787 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:47,787 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:47,787 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:47,787 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:47,787 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:47,787 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, ASSIGN}] 2023-07-17 22:15:47,788 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, ASSIGN 2023-07-17 22:15:47,789 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44063,1689632146327; forceNewPlan=false, retain=false 2023-07-17 22:15:47,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 22:15:47,901 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 
2023-07-17 22:15:47,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8dc45a7a1cbcdeef535b03350181395e, NAME => 'hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:47,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:47,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,902 INFO [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,904 DEBUG [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e/q 2023-07-17 22:15:47,904 DEBUG [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e/q 2023-07-17 22:15:47,904 INFO [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8dc45a7a1cbcdeef535b03350181395e columnFamilyName q 2023-07-17 22:15:47,904 INFO [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] regionserver.HStore(310): Store=8dc45a7a1cbcdeef535b03350181395e/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,904 INFO [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,906 DEBUG 
[StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e/u 2023-07-17 22:15:47,906 DEBUG [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e/u 2023-07-17 22:15:47,906 INFO [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8dc45a7a1cbcdeef535b03350181395e columnFamilyName u 2023-07-17 22:15:47,906 INFO [StoreOpener-8dc45a7a1cbcdeef535b03350181395e-1] regionserver.HStore(310): Store=8dc45a7a1cbcdeef535b03350181395e/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:47,907 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,908 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,910 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-17 22:15:47,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:47,913 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:47,914 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8dc45a7a1cbcdeef535b03350181395e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9990782400, jitterRate=-0.06953588128089905}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-17 22:15:47,914 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8dc45a7a1cbcdeef535b03350181395e: 2023-07-17 22:15:47,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e., pid=16, masterSystemTime=1689632147897 2023-07-17 22:15:47,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:47,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:47,916 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=8dc45a7a1cbcdeef535b03350181395e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:47,917 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689632147916"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632147916"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632147916"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632147916"}]},"ts":"1689632147916"} 2023-07-17 22:15:47,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-17 22:15:47,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 8dc45a7a1cbcdeef535b03350181395e, server=jenkins-hbase4.apache.org,41139,1689632146271 in 173 msec 2023-07-17 22:15:47,921 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-17 22:15:47,921 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=8dc45a7a1cbcdeef535b03350181395e, ASSIGN in 329 msec 2023-07-17 22:15:47,921 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:47,921 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632147921"}]},"ts":"1689632147921"} 2023-07-17 22:15:47,922 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-17 22:15:47,925 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:47,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 388 msec 2023-07-17 22:15:47,939 INFO [jenkins-hbase4:46299] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 22:15:47,940 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=3688a5e204fe1df3766abdef598cdac1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:47,940 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632147940"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632147940"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632147940"}]},"ts":"1689632147940"} 2023-07-17 22:15:47,942 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 3688a5e204fe1df3766abdef598cdac1, server=jenkins-hbase4.apache.org,44063,1689632146327}] 2023-07-17 22:15:48,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 22:15:48,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 
2023-07-17 22:15:48,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3688a5e204fe1df3766abdef598cdac1, NAME => 'np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:48,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:48,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,098 INFO [StoreOpener-3688a5e204fe1df3766abdef598cdac1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,100 DEBUG [StoreOpener-3688a5e204fe1df3766abdef598cdac1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/fam1 2023-07-17 22:15:48,100 DEBUG [StoreOpener-3688a5e204fe1df3766abdef598cdac1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/fam1 2023-07-17 22:15:48,100 INFO [StoreOpener-3688a5e204fe1df3766abdef598cdac1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3688a5e204fe1df3766abdef598cdac1 columnFamilyName fam1 2023-07-17 22:15:48,101 INFO [StoreOpener-3688a5e204fe1df3766abdef598cdac1-1] regionserver.HStore(310): Store=3688a5e204fe1df3766abdef598cdac1/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:48,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:48,109 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3688a5e204fe1df3766abdef598cdac1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10989620960, jitterRate=0.023488208651542664}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:48,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3688a5e204fe1df3766abdef598cdac1: 2023-07-17 22:15:48,110 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1., pid=18, masterSystemTime=1689632148093 2023-07-17 22:15:48,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:48,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:48,112 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=3688a5e204fe1df3766abdef598cdac1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:48,113 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632148112"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632148112"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632148112"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632148112"}]},"ts":"1689632148112"} 2023-07-17 22:15:48,116 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-17 22:15:48,117 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 3688a5e204fe1df3766abdef598cdac1, server=jenkins-hbase4.apache.org,44063,1689632146327 in 173 msec 2023-07-17 22:15:48,118 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-17 22:15:48,118 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, ASSIGN in 329 msec 2023-07-17 22:15:48,118 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:48,119 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632148119"}]},"ts":"1689632148119"} 2023-07-17 22:15:48,120 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-17 22:15:48,122 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:48,123 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 378 msec 2023-07-17 22:15:48,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 22:15:48,351 INFO [Listener at localhost/42151] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-17 22:15:48,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:48,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-17 22:15:48,356 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:48,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-17 22:15:48,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 22:15:48,377 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-17 22:15:48,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 22:15:48,460 INFO [Listener at localhost/42151] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
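The rolled-back pid=19 above is the namespace region quota at work: np1 is capped at 5 regions, so a 6-region np1:table2 is rejected and the CreateTableProcedure ends in ROLLEDBACK with a QuotaExceededException. A minimal sketch of how such a cap is configured and then tripped (illustrative only; the split keys and region count here are made up, and on the client the failure surfaces as an IOException carrying the master-side message shown in the log):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Cap the namespace at 5 regions in total.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());

      try {
        // Pre-splitting np1:table2 into 6 regions pushes the namespace past its
        // 5-region cap, so the master rolls the CreateTableProcedure back.
        admin.createTable(TableDescriptorBuilder
                .newBuilder(TableName.valueOf("np1", "table2"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                .build(),
            Bytes.toBytes("a"), Bytes.toBytes("z"), 6);
      } catch (IOException e) {
        // Message matches the "not allowed to have 6 regions" text in the trace.
        System.out.println("create rejected: " + e.getMessage());
      }
    }
  }
}
```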
2023-07-17 22:15:48,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:48,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:48,462 INFO [Listener at localhost/42151] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-17 22:15:48,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-17 22:15:48,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-17 22:15:48,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 22:15:48,466 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632148466"}]},"ts":"1689632148466"} 2023-07-17 22:15:48,467 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-17 22:15:48,468 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-17 22:15:48,469 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, UNASSIGN}] 2023-07-17 22:15:48,470 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, UNASSIGN 2023-07-17 22:15:48,470 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=3688a5e204fe1df3766abdef598cdac1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:48,470 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632148470"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632148470"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632148470"}]},"ts":"1689632148470"} 2023-07-17 22:15:48,471 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 3688a5e204fe1df3766abdef598cdac1, server=jenkins-hbase4.apache.org,44063,1689632146327}] 2023-07-17 22:15:48,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 22:15:48,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3688a5e204fe1df3766abdef598cdac1, disabling compactions & flushes 2023-07-17 22:15:48,625 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:48,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:48,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. after waiting 0 ms 2023-07-17 22:15:48,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:48,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:48,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1. 2023-07-17 22:15:48,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3688a5e204fe1df3766abdef598cdac1: 2023-07-17 22:15:48,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,631 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=3688a5e204fe1df3766abdef598cdac1, regionState=CLOSED 2023-07-17 22:15:48,631 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632148631"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632148631"}]},"ts":"1689632148631"} 2023-07-17 22:15:48,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-17 22:15:48,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 3688a5e204fe1df3766abdef598cdac1, server=jenkins-hbase4.apache.org,44063,1689632146327 in 161 msec 2023-07-17 22:15:48,635 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-17 22:15:48,635 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=3688a5e204fe1df3766abdef598cdac1, UNASSIGN in 165 msec 2023-07-17 22:15:48,636 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632148636"}]},"ts":"1689632148636"} 2023-07-17 22:15:48,637 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-17 22:15:48,640 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-17 22:15:48,642 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 177 msec 2023-07-17 22:15:48,707 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried 
hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-17 22:15:48,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 22:15:48,768 INFO [Listener at localhost/42151] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-17 22:15:48,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-17 22:15:48,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-17 22:15:48,772 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 22:15:48,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-17 22:15:48,772 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 22:15:48,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:48,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:48,776 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,778 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/fam1, FileablePath, hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/recovered.edits] 2023-07-17 22:15:48,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-17 22:15:48,788 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/recovered.edits/4.seqid to hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/archive/data/np1/table1/3688a5e204fe1df3766abdef598cdac1/recovered.edits/4.seqid 2023-07-17 22:15:48,789 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/.tmp/data/np1/table1/3688a5e204fe1df3766abdef598cdac1 2023-07-17 22:15:48,789 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-17 22:15:48,791 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 22:15:48,792 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): 
Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-17 22:15:48,794 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-17 22:15:48,796 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 22:15:48,796 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-17 22:15:48,796 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632148796"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:48,797 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 22:15:48,797 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3688a5e204fe1df3766abdef598cdac1, NAME => 'np1:table1,,1689632147743.3688a5e204fe1df3766abdef598cdac1.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 22:15:48,797 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-17 22:15:48,797 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632148797"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:48,799 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-17 22:15:48,802 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 22:15:48,803 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 33 msec 2023-07-17 22:15:48,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-17 22:15:48,885 INFO [Listener at localhost/42151] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-17 22:15:48,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-17 22:15:48,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-17 22:15:48,898 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 22:15:48,901 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 22:15:48,903 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 22:15:48,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-17 22:15:48,904 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-17 22:15:48,905 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:48,905 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 22:15:48,907 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 22:15:48,908 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-17 22:15:49,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46299] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-17 22:15:49,005 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-17 22:15:49,005 INFO [Listener at localhost/42151] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-17 22:15:49,005 DEBUG [Listener at localhost/42151] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2eace996 to 127.0.0.1:52793 2023-07-17 22:15:49,005 DEBUG [Listener at localhost/42151] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,005 DEBUG [Listener at localhost/42151] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-17 22:15:49,006 DEBUG [Listener at localhost/42151] util.JVMClusterUtil(257): Found active master hash=1680690158, stopped=false 2023-07-17 22:15:49,006 DEBUG [Listener at localhost/42151] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 22:15:49,006 DEBUG [Listener at localhost/42151] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 22:15:49,006 DEBUG [Listener at localhost/42151] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-17 22:15:49,006 INFO [Listener at localhost/42151] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:49,007 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:49,007 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:49,007 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:49,007 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/running 2023-07-17 22:15:49,008 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:49,008 INFO [Listener at localhost/42151] procedure2.ProcedureExecutor(629): Stopping 2023-07-17 22:15:49,008 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:49,008 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:49,009 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:49,010 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:49,011 DEBUG [Listener at localhost/42151] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x06470caf to 127.0.0.1:52793 2023-07-17 22:15:49,011 DEBUG [Listener at localhost/42151] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,011 INFO [Listener at localhost/42151] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46645,1689632146227' ***** 2023-07-17 22:15:49,011 INFO [Listener at localhost/42151] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:49,011 INFO [Listener at localhost/42151] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41139,1689632146271' ***** 2023-07-17 22:15:49,011 INFO [Listener at localhost/42151] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:49,011 INFO [Listener at localhost/42151] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44063,1689632146327' ***** 2023-07-17 22:15:49,012 INFO [Listener at localhost/42151] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:49,011 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:49,011 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:49,013 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:49,026 INFO [RS:1;jenkins-hbase4:41139] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f9228ec{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:49,026 INFO [RS:2;jenkins-hbase4:44063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4e7f4242{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:49,026 INFO [RS:0;jenkins-hbase4:46645] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@75d7c124{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:49,027 INFO [RS:1;jenkins-hbase4:41139] server.AbstractConnector(383): Stopped ServerConnector@63475078{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:49,027 INFO [RS:2;jenkins-hbase4:44063] server.AbstractConnector(383): Stopped ServerConnector@2c408b4b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:49,027 INFO [RS:1;jenkins-hbase4:41139] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:49,027 INFO [RS:2;jenkins-hbase4:44063] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:49,027 INFO [RS:0;jenkins-hbase4:46645] server.AbstractConnector(383): Stopped ServerConnector@10a92bd3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:49,028 INFO [RS:1;jenkins-hbase4:41139] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d1cc2f3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:49,029 INFO [RS:2;jenkins-hbase4:44063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b8bc652{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:49,030 INFO [RS:1;jenkins-hbase4:41139] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@885bf81{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:49,029 INFO [RS:0;jenkins-hbase4:46645] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:49,030 INFO [RS:2;jenkins-hbase4:44063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72d7b0a9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:49,030 INFO [RS:0;jenkins-hbase4:46645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5033e558{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:49,030 INFO [RS:0;jenkins-hbase4:46645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4028bb16{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:49,030 INFO [RS:1;jenkins-hbase4:41139] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:49,030 INFO [RS:1;jenkins-hbase4:41139] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:49,030 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:49,030 INFO [RS:1;jenkins-hbase4:41139] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
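Before the shutdown messages above, the trace showed np1:table1 being disabled, then deleted, and finally the np1 namespace being dropped. Those three steps map onto three Admin calls; a minimal sketch, assuming the namespace is otherwise empty by the time it is deleted:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceCleanupSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("np1", "table1");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.disableTable(table);    // DisableTableProcedure: region unassigned, state=DISABLED
      admin.deleteTable(table);     // DeleteTableProcedure: HDFS layout archived, meta rows removed
      admin.deleteNamespace("np1"); // DeleteNamespaceProcedure: only legal once the namespace is empty
    }
  }
}
```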
2023-07-17 22:15:49,031 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(3305): Received CLOSE for 11c0756bf0fbd526d9ce6d310df40bcb 2023-07-17 22:15:49,032 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(3305): Received CLOSE for 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:49,032 INFO [RS:0;jenkins-hbase4:46645] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:49,032 INFO [RS:0;jenkins-hbase4:46645] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:49,032 INFO [RS:0;jenkins-hbase4:46645] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:49,032 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:49,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 11c0756bf0fbd526d9ce6d310df40bcb, disabling compactions & flushes 2023-07-17 22:15:49,032 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:49,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:49,033 DEBUG [RS:0;jenkins-hbase4:46645] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0018725e to 127.0.0.1:52793 2023-07-17 22:15:49,032 INFO [RS:2;jenkins-hbase4:44063] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:49,032 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:49,033 INFO [RS:2;jenkins-hbase4:44063] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:49,033 DEBUG [RS:0;jenkins-hbase4:46645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:49,033 DEBUG [RS:1;jenkins-hbase4:41139] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67256f51 to 127.0.0.1:52793 2023-07-17 22:15:49,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. after waiting 0 ms 2023-07-17 22:15:49,034 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46645,1689632146227; all regions closed. 2023-07-17 22:15:49,034 INFO [RS:2;jenkins-hbase4:44063] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:49,033 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:49,035 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(3305): Received CLOSE for edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:49,035 DEBUG [RS:0;jenkins-hbase4:46645] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-17 22:15:49,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 
2023-07-17 22:15:49,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing edea253b94d278b5e20395d0aaaa9641, disabling compactions & flushes 2023-07-17 22:15:49,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 11c0756bf0fbd526d9ce6d310df40bcb 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-17 22:15:49,034 DEBUG [RS:1;jenkins-hbase4:41139] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:49,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:49,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. after waiting 0 ms 2023-07-17 22:15:49,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:49,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing edea253b94d278b5e20395d0aaaa9641 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-17 22:15:49,035 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:49,039 DEBUG [RS:2;jenkins-hbase4:44063] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x010e42c0 to 127.0.0.1:52793 2023-07-17 22:15:49,039 DEBUG [RS:2;jenkins-hbase4:44063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,039 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 22:15:49,039 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1478): Online Regions={edea253b94d278b5e20395d0aaaa9641=hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641.} 2023-07-17 22:15:49,039 DEBUG [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1504): Waiting on edea253b94d278b5e20395d0aaaa9641 2023-07-17 22:15:49,036 INFO [RS:1;jenkins-hbase4:41139] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:49,039 INFO [RS:1;jenkins-hbase4:41139] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:49,039 INFO [RS:1;jenkins-hbase4:41139] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-17 22:15:49,039 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-17 22:15:49,040 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-17 22:15:49,040 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 11c0756bf0fbd526d9ce6d310df40bcb=hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb., 8dc45a7a1cbcdeef535b03350181395e=hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e.} 2023-07-17 22:15:49,042 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:49,042 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:49,042 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:49,042 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:49,042 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:49,042 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-17 22:15:49,043 DEBUG [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1504): Waiting on 11c0756bf0fbd526d9ce6d310df40bcb, 1588230740, 8dc45a7a1cbcdeef535b03350181395e 2023-07-17 22:15:49,051 DEBUG [RS:0;jenkins-hbase4:46645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs 2023-07-17 22:15:49,051 INFO [RS:0;jenkins-hbase4:46645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46645%2C1689632146227:(num 1689632146799) 2023-07-17 22:15:49,051 DEBUG [RS:0;jenkins-hbase4:46645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,051 INFO [RS:0;jenkins-hbase4:46645] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:49,055 INFO [RS:0;jenkins-hbase4:46645] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:49,055 INFO [RS:0;jenkins-hbase4:46645] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:49,055 INFO [RS:0;jenkins-hbase4:46645] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:49,055 INFO [RS:0;jenkins-hbase4:46645] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:49,056 INFO [RS:0;jenkins-hbase4:46645] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46645 2023-07-17 22:15:49,056 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-17 22:15:49,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/.tmp/info/5e875a53825945b4bb694bcad5ca760e 2023-07-17 22:15:49,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/.tmp/m/927e5beac14b47fea5877ff16c30014d 2023-07-17 22:15:49,083 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/.tmp/info/7480d5be746d4d438d0582804137fd27 2023-07-17 22:15:49,087 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e875a53825945b4bb694bcad5ca760e 2023-07-17 22:15:49,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/.tmp/info/5e875a53825945b4bb694bcad5ca760e as hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/info/5e875a53825945b4bb694bcad5ca760e 2023-07-17 22:15:49,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7480d5be746d4d438d0582804137fd27 2023-07-17 22:15:49,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/.tmp/m/927e5beac14b47fea5877ff16c30014d as hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/m/927e5beac14b47fea5877ff16c30014d 2023-07-17 22:15:49,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e875a53825945b4bb694bcad5ca760e 2023-07-17 22:15:49,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/info/5e875a53825945b4bb694bcad5ca760e, entries=3, sequenceid=8, filesize=5.0 K 2023-07-17 22:15:49,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 11c0756bf0fbd526d9ce6d310df40bcb in 61ms, sequenceid=8, compaction requested=false 2023-07-17 22:15:49,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-17 22:15:49,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HStore(1080): Added hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/m/927e5beac14b47fea5877ff16c30014d, entries=1, sequenceid=7, filesize=4.9 K 2023-07-17 22:15:49,098 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:49,098 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:49,100 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:49,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for edea253b94d278b5e20395d0aaaa9641 in 66ms, sequenceid=7, compaction requested=false 2023-07-17 22:15:49,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-17 22:15:49,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/namespace/11c0756bf0fbd526d9ce6d310df40bcb/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-17 22:15:49,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:49,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 11c0756bf0fbd526d9ce6d310df40bcb: 2023-07-17 22:15:49,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689632147074.11c0756bf0fbd526d9ce6d310df40bcb. 2023-07-17 22:15:49,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8dc45a7a1cbcdeef535b03350181395e, disabling compactions & flushes 2023-07-17 22:15:49,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:49,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:49,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. after waiting 0 ms 2023-07-17 22:15:49,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 
2023-07-17 22:15:49,111 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/.tmp/rep_barrier/bd32c42f5e6649c2977b8f9f50a3ddfa 2023-07-17 22:15:49,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/rsgroup/edea253b94d278b5e20395d0aaaa9641/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-17 22:15:49,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:49,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:49,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for edea253b94d278b5e20395d0aaaa9641: 2023-07-17 22:15:49,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689632147140.edea253b94d278b5e20395d0aaaa9641. 2023-07-17 22:15:49,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/quota/8dc45a7a1cbcdeef535b03350181395e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:49,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 2023-07-17 22:15:49,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8dc45a7a1cbcdeef535b03350181395e: 2023-07-17 22:15:49,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689632147537.8dc45a7a1cbcdeef535b03350181395e. 
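The closes above flush any pending memstore data into new HFiles (for example 215 B for hbase:namespace and 585 B for hbase:rsgroup) and record a recovered.edits/N.seqid marker per region. The same flush can also be requested explicitly through the Admin API; a tiny sketch with a purely hypothetical table name:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExplicitFlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Hypothetical table name; forces memstore contents into HFiles,
      // similar to the per-region flushes logged during region close.
      admin.flush(TableName.valueOf("some_table"));
    }
  }
}
```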
2023-07-17 22:15:49,117 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bd32c42f5e6649c2977b8f9f50a3ddfa 2023-07-17 22:15:49,129 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/.tmp/table/526b7fc6ce574bcfbcc8de1774f33ec3 2023-07-17 22:15:49,134 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 526b7fc6ce574bcfbcc8de1774f33ec3 2023-07-17 22:15:49,135 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/.tmp/info/7480d5be746d4d438d0582804137fd27 as hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/info/7480d5be746d4d438d0582804137fd27 2023-07-17 22:15:49,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7480d5be746d4d438d0582804137fd27 2023-07-17 22:15:49,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/info/7480d5be746d4d438d0582804137fd27, entries=32, sequenceid=31, filesize=8.5 K 2023-07-17 22:15:49,141 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/.tmp/rep_barrier/bd32c42f5e6649c2977b8f9f50a3ddfa as hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/rep_barrier/bd32c42f5e6649c2977b8f9f50a3ddfa 2023-07-17 22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 
22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46645,1689632146227 2023-07-17 22:15:49,146 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:49,146 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46645,1689632146227] 2023-07-17 22:15:49,146 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46645,1689632146227; numProcessing=1 2023-07-17 22:15:49,147 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46645,1689632146227 already deleted, retry=false 2023-07-17 22:15:49,148 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46645,1689632146227 expired; onlineServers=2 2023-07-17 22:15:49,148 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bd32c42f5e6649c2977b8f9f50a3ddfa 2023-07-17 22:15:49,149 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/rep_barrier/bd32c42f5e6649c2977b8f9f50a3ddfa, entries=1, sequenceid=31, filesize=4.9 K 2023-07-17 22:15:49,149 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/.tmp/table/526b7fc6ce574bcfbcc8de1774f33ec3 as hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/table/526b7fc6ce574bcfbcc8de1774f33ec3 2023-07-17 22:15:49,157 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 526b7fc6ce574bcfbcc8de1774f33ec3 2023-07-17 22:15:49,157 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/table/526b7fc6ce574bcfbcc8de1774f33ec3, entries=8, sequenceid=31, filesize=5.2 K 2023-07-17 22:15:49,158 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 115ms, sequenceid=31, compaction requested=false 2023-07-17 22:15:49,158 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-17 22:15:49,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-17 22:15:49,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop 
coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:49,179 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:49,179 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:49,179 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:49,239 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44063,1689632146327; all regions closed. 2023-07-17 22:15:49,239 DEBUG [RS:2;jenkins-hbase4:44063] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-17 22:15:49,243 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41139,1689632146271; all regions closed. 2023-07-17 22:15:49,243 DEBUG [RS:1;jenkins-hbase4:41139] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-17 22:15:49,249 DEBUG [RS:2;jenkins-hbase4:44063] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs 2023-07-17 22:15:49,249 INFO [RS:2;jenkins-hbase4:44063] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44063%2C1689632146327:(num 1689632146799) 2023-07-17 22:15:49,249 DEBUG [RS:2;jenkins-hbase4:44063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,249 INFO [RS:2;jenkins-hbase4:44063] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:49,249 INFO [RS:2;jenkins-hbase4:44063] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:49,249 INFO [RS:2;jenkins-hbase4:44063] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:49,249 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:49,250 INFO [RS:2;jenkins-hbase4:44063] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:49,250 INFO [RS:2;jenkins-hbase4:44063] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-17 22:15:49,250 INFO [RS:2;jenkins-hbase4:44063] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44063 2023-07-17 22:15:49,253 DEBUG [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs 2023-07-17 22:15:49,253 INFO [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41139%2C1689632146271.meta:.meta(num 1689632147007) 2023-07-17 22:15:49,254 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:49,254 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44063,1689632146327 2023-07-17 22:15:49,254 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:49,255 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44063,1689632146327] 2023-07-17 22:15:49,256 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44063,1689632146327; numProcessing=2 2023-07-17 22:15:49,258 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44063,1689632146327 already deleted, retry=false 2023-07-17 22:15:49,258 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44063,1689632146327 expired; onlineServers=1 2023-07-17 22:15:49,260 DEBUG [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/oldWALs 2023-07-17 22:15:49,260 INFO [RS:1;jenkins-hbase4:41139] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41139%2C1689632146271:(num 1689632146798) 2023-07-17 22:15:49,260 DEBUG [RS:1;jenkins-hbase4:41139] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,260 INFO [RS:1;jenkins-hbase4:41139] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:49,260 INFO [RS:1;jenkins-hbase4:41139] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:49,260 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-17 22:15:49,261 INFO [RS:1;jenkins-hbase4:41139] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41139 2023-07-17 22:15:49,265 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:49,265 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41139,1689632146271 2023-07-17 22:15:49,266 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41139,1689632146271] 2023-07-17 22:15:49,266 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41139,1689632146271; numProcessing=3 2023-07-17 22:15:49,267 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41139,1689632146271 already deleted, retry=false 2023-07-17 22:15:49,267 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41139,1689632146271 expired; onlineServers=0 2023-07-17 22:15:49,267 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46299,1689632146159' ***** 2023-07-17 22:15:49,267 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-17 22:15:49,269 DEBUG [M:0;jenkins-hbase4:46299] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6012c3b3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:49,269 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:49,270 INFO [M:0;jenkins-hbase4:46299] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@342ff873{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 22:15:49,270 INFO [M:0;jenkins-hbase4:46299] server.AbstractConnector(383): Stopped ServerConnector@23515fca{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:49,271 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:49,271 INFO [M:0;jenkins-hbase4:46299] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:49,271 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:49,271 INFO [M:0;jenkins-hbase4:46299] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6dbadf33{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:49,271 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:49,271 INFO [M:0;jenkins-hbase4:46299] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@52285fac{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:49,272 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46299,1689632146159 2023-07-17 22:15:49,272 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46299,1689632146159; all regions closed. 2023-07-17 22:15:49,272 DEBUG [M:0;jenkins-hbase4:46299] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:49,272 INFO [M:0;jenkins-hbase4:46299] master.HMaster(1491): Stopping master jetty server 2023-07-17 22:15:49,272 INFO [M:0;jenkins-hbase4:46299] server.AbstractConnector(383): Stopped ServerConnector@21d0e896{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:49,273 DEBUG [M:0;jenkins-hbase4:46299] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-17 22:15:49,273 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-17 22:15:49,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632146534] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632146534,5,FailOnTimeoutGroup] 2023-07-17 22:15:49,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632146534] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632146534,5,FailOnTimeoutGroup] 2023-07-17 22:15:49,273 DEBUG [M:0;jenkins-hbase4:46299] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-17 22:15:49,274 INFO [M:0;jenkins-hbase4:46299] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-17 22:15:49,274 INFO [M:0;jenkins-hbase4:46299] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-17 22:15:49,274 INFO [M:0;jenkins-hbase4:46299] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:49,274 DEBUG [M:0;jenkins-hbase4:46299] master.HMaster(1512): Stopping service threads 2023-07-17 22:15:49,274 INFO [M:0;jenkins-hbase4:46299] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-17 22:15:49,275 ERROR [M:0;jenkins-hbase4:46299] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-17 22:15:49,275 INFO [M:0;jenkins-hbase4:46299] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-17 22:15:49,275 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-17 22:15:49,275 DEBUG [M:0;jenkins-hbase4:46299] zookeeper.ZKUtil(398): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-17 22:15:49,275 WARN [M:0;jenkins-hbase4:46299] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-17 22:15:49,275 INFO [M:0;jenkins-hbase4:46299] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-17 22:15:49,276 INFO [M:0;jenkins-hbase4:46299] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-17 22:15:49,276 DEBUG [M:0;jenkins-hbase4:46299] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 22:15:49,276 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:49,276 DEBUG [M:0;jenkins-hbase4:46299] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:49,276 DEBUG [M:0;jenkins-hbase4:46299] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 22:15:49,276 DEBUG [M:0;jenkins-hbase4:46299] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:49,276 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-17 22:15:49,289 INFO [M:0;jenkins-hbase4:46299] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c7bd97c4289d4312982304001bc2a59b 2023-07-17 22:15:49,295 DEBUG [M:0;jenkins-hbase4:46299] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c7bd97c4289d4312982304001bc2a59b as hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c7bd97c4289d4312982304001bc2a59b 2023-07-17 22:15:49,300 INFO [M:0;jenkins-hbase4:46299] regionserver.HStore(1080): Added hdfs://localhost:41705/user/jenkins/test-data/7099b34d-0224-efd8-42f3-e84bf2e28500/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c7bd97c4289d4312982304001bc2a59b, entries=24, sequenceid=194, filesize=12.4 K 2023-07-17 22:15:49,301 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95216, heapSize ~109.12 KB/111736, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=194, compaction requested=false 2023-07-17 22:15:49,303 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 22:15:49,303 DEBUG [M:0;jenkins-hbase4:46299] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:49,308 INFO [M:0;jenkins-hbase4:46299] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-17 22:15:49,308 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:49,308 INFO [M:0;jenkins-hbase4:46299] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46299 2023-07-17 22:15:49,309 DEBUG [M:0;jenkins-hbase4:46299] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46299,1689632146159 already deleted, retry=false 2023-07-17 22:15:49,509 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,509 INFO [M:0;jenkins-hbase4:46299] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46299,1689632146159; zookeeper connection closed. 2023-07-17 22:15:49,509 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): master:46299-0x101755b05d70000, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,609 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,609 INFO [RS:1;jenkins-hbase4:41139] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41139,1689632146271; zookeeper connection closed. 2023-07-17 22:15:49,609 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:41139-0x101755b05d70002, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,611 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@35d9cb32] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@35d9cb32 2023-07-17 22:15:49,709 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,709 INFO [RS:2;jenkins-hbase4:44063] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44063,1689632146327; zookeeper connection closed. 2023-07-17 22:15:49,709 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:44063-0x101755b05d70003, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,710 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@64937809] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@64937809 2023-07-17 22:15:49,810 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,810 INFO [RS:0;jenkins-hbase4:46645] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46645,1689632146227; zookeeper connection closed. 
2023-07-17 22:15:49,810 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): regionserver:46645-0x101755b05d70001, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:49,810 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19af48ef] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19af48ef 2023-07-17 22:15:49,810 INFO [Listener at localhost/42151] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-17 22:15:49,810 WARN [Listener at localhost/42151] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:49,816 INFO [Listener at localhost/42151] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:49,924 WARN [BP-2030503759-172.31.14.131-1689632145360 heartbeating to localhost/127.0.0.1:41705] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:49,924 WARN [BP-2030503759-172.31.14.131-1689632145360 heartbeating to localhost/127.0.0.1:41705] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2030503759-172.31.14.131-1689632145360 (Datanode Uuid 9ec323b8-b857-4dec-b84b-02d9a37a2db5) service to localhost/127.0.0.1:41705 2023-07-17 22:15:49,925 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/dfs/data/data5/current/BP-2030503759-172.31.14.131-1689632145360] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:49,925 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/dfs/data/data6/current/BP-2030503759-172.31.14.131-1689632145360] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:49,929 WARN [Listener at localhost/42151] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:49,936 INFO [Listener at localhost/42151] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:49,936 WARN [1389666582@qtp-1706189719-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46273] http.HttpServer2$SelectChannelConnectorWithSafeStartup(546): HttpServer Acceptor: isRunning is false. Rechecking. 
2023-07-17 22:15:49,937 WARN [1389666582@qtp-1706189719-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46273] http.HttpServer2$SelectChannelConnectorWithSafeStartup(555): HttpServer Acceptor: isRunning is false 2023-07-17 22:15:50,041 WARN [BP-2030503759-172.31.14.131-1689632145360 heartbeating to localhost/127.0.0.1:41705] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:50,041 WARN [BP-2030503759-172.31.14.131-1689632145360 heartbeating to localhost/127.0.0.1:41705] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2030503759-172.31.14.131-1689632145360 (Datanode Uuid 9eec64c9-37cc-4fe4-bac0-51330008f3c2) service to localhost/127.0.0.1:41705 2023-07-17 22:15:50,041 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/dfs/data/data3/current/BP-2030503759-172.31.14.131-1689632145360] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:50,042 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/dfs/data/data4/current/BP-2030503759-172.31.14.131-1689632145360] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:50,043 WARN [Listener at localhost/42151] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:50,046 INFO [Listener at localhost/42151] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:50,149 WARN [BP-2030503759-172.31.14.131-1689632145360 heartbeating to localhost/127.0.0.1:41705] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:50,149 WARN [BP-2030503759-172.31.14.131-1689632145360 heartbeating to localhost/127.0.0.1:41705] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2030503759-172.31.14.131-1689632145360 (Datanode Uuid 81997728-5ea6-46a7-88ba-03eb1aa91543) service to localhost/127.0.0.1:41705 2023-07-17 22:15:50,149 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/dfs/data/data1/current/BP-2030503759-172.31.14.131-1689632145360] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:50,150 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/cluster_71a8ab50-1d50-f95a-46ce-9175e575a3cb/dfs/data/data2/current/BP-2030503759-172.31.14.131-1689632145360] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:50,160 INFO [Listener at localhost/42151] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:50,275 INFO [Listener at localhost/42151] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-17 22:15:50,302 INFO [Listener at localhost/42151] 
hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.log.dir so I do NOT create it in target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d3cc4ba-1d5a-b6de-a45f-817535ead739/hadoop.tmp.dir so I do NOT create it in target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455, deleteOnExit=true 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/test.cache.data in system properties and HBase conf 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.tmp.dir in system properties and HBase conf 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir in system properties and HBase conf 2023-07-17 22:15:50,303 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-17 22:15:50,304 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-17 22:15:50,304 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-17 22:15:50,304 DEBUG [Listener at localhost/42151] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-17 22:15:50,304 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-17 22:15:50,304 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-17 22:15:50,304 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-17 22:15:50,304 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/nfs.dump.dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/java.io.tmpdir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 22:15:50,305 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-17 22:15:50,306 INFO [Listener at localhost/42151] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-17 22:15:50,310 WARN [Listener at localhost/42151] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 22:15:50,310 WARN [Listener at localhost/42151] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 22:15:50,354 WARN [Listener at localhost/42151] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:50,356 INFO [Listener at localhost/42151] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:50,361 INFO [Listener at localhost/42151] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/java.io.tmpdir/Jetty_localhost_44843_hdfs____.37dqn3/webapp 2023-07-17 22:15:50,373 DEBUG [Listener at localhost/42151-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101755b05d7000a, quorum=127.0.0.1:52793, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-17 22:15:50,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101755b05d7000a, quorum=127.0.0.1:52793, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-17 22:15:50,460 INFO [Listener at localhost/42151] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44843 2023-07-17 22:15:50,465 WARN [Listener at localhost/42151] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 22:15:50,465 WARN [Listener at localhost/42151] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 22:15:50,508 WARN [Listener at localhost/46771] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 22:15:50,526 WARN [Listener at localhost/46771] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 22:15:50,530 WARN [Listener 
at localhost/46771] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:50,531 INFO [Listener at localhost/46771] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:50,537 INFO [Listener at localhost/46771] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/java.io.tmpdir/Jetty_localhost_44077_datanode____sxncdz/webapp 2023-07-17 22:15:50,630 INFO [Listener at localhost/46771] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44077 2023-07-17 22:15:50,641 WARN [Listener at localhost/43153] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 22:15:50,683 WARN [Listener at localhost/43153] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 22:15:50,685 WARN [Listener at localhost/43153] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:50,686 INFO [Listener at localhost/43153] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:50,694 INFO [Listener at localhost/43153] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/java.io.tmpdir/Jetty_localhost_39493_datanode____.4a0c1d/webapp 2023-07-17 22:15:50,773 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x16ef0c4afdeb7597: Processing first storage report for DS-2c7673a8-6660-4cab-8d85-738db2370e2e from datanode beab89f0-2fda-4c0b-b0a2-7ab9ea552411 2023-07-17 22:15:50,773 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x16ef0c4afdeb7597: from storage DS-2c7673a8-6660-4cab-8d85-738db2370e2e node DatanodeRegistration(127.0.0.1:39967, datanodeUuid=beab89f0-2fda-4c0b-b0a2-7ab9ea552411, infoPort=34313, infoSecurePort=0, ipcPort=43153, storageInfo=lv=-57;cid=testClusterID;nsid=530650479;c=1689632150312), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:50,773 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x16ef0c4afdeb7597: Processing first storage report for DS-fd624a2e-f206-4a35-b951-2504743b44ad from datanode beab89f0-2fda-4c0b-b0a2-7ab9ea552411 2023-07-17 22:15:50,773 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x16ef0c4afdeb7597: from storage DS-fd624a2e-f206-4a35-b951-2504743b44ad node DatanodeRegistration(127.0.0.1:39967, datanodeUuid=beab89f0-2fda-4c0b-b0a2-7ab9ea552411, infoPort=34313, infoSecurePort=0, ipcPort=43153, storageInfo=lv=-57;cid=testClusterID;nsid=530650479;c=1689632150312), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:50,799 INFO [Listener at localhost/43153] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39493 2023-07-17 22:15:50,806 WARN [Listener at localhost/36903] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-17 22:15:50,827 WARN [Listener at localhost/36903] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 22:15:50,829 WARN [Listener at localhost/36903] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 22:15:50,830 INFO [Listener at localhost/36903] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 22:15:50,833 INFO [Listener at localhost/36903] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/java.io.tmpdir/Jetty_localhost_39439_datanode____grv1wb/webapp 2023-07-17 22:15:50,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x667aea242c6427c0: Processing first storage report for DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b from datanode ce28f8a5-8b75-4507-ad20-d9ae63e623cf 2023-07-17 22:15:50,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x667aea242c6427c0: from storage DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b node DatanodeRegistration(127.0.0.1:36521, datanodeUuid=ce28f8a5-8b75-4507-ad20-d9ae63e623cf, infoPort=39185, infoSecurePort=0, ipcPort=36903, storageInfo=lv=-57;cid=testClusterID;nsid=530650479;c=1689632150312), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:50,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x667aea242c6427c0: Processing first storage report for DS-f42b96ec-00dd-4f7e-8aa7-d1dd066be7e8 from datanode ce28f8a5-8b75-4507-ad20-d9ae63e623cf 2023-07-17 22:15:50,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x667aea242c6427c0: from storage DS-f42b96ec-00dd-4f7e-8aa7-d1dd066be7e8 node DatanodeRegistration(127.0.0.1:36521, datanodeUuid=ce28f8a5-8b75-4507-ad20-d9ae63e623cf, infoPort=39185, infoSecurePort=0, ipcPort=36903, storageInfo=lv=-57;cid=testClusterID;nsid=530650479;c=1689632150312), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:50,931 INFO [Listener at localhost/36903] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39439 2023-07-17 22:15:50,942 WARN [Listener at localhost/44229] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 22:15:51,049 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x53e636cb703986e1: Processing first storage report for DS-bc130240-e495-4017-b698-2d0bde6f5868 from datanode 4e59c173-99d5-44bd-ac8e-2eed8a828332 2023-07-17 22:15:51,049 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x53e636cb703986e1: from storage DS-bc130240-e495-4017-b698-2d0bde6f5868 node DatanodeRegistration(127.0.0.1:37475, datanodeUuid=4e59c173-99d5-44bd-ac8e-2eed8a828332, infoPort=34683, infoSecurePort=0, ipcPort=44229, storageInfo=lv=-57;cid=testClusterID;nsid=530650479;c=1689632150312), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:51,049 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x53e636cb703986e1: Processing first storage 
report for DS-935259d3-4c13-47a6-b2cb-341d5ee81217 from datanode 4e59c173-99d5-44bd-ac8e-2eed8a828332 2023-07-17 22:15:51,049 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x53e636cb703986e1: from storage DS-935259d3-4c13-47a6-b2cb-341d5ee81217 node DatanodeRegistration(127.0.0.1:37475, datanodeUuid=4e59c173-99d5-44bd-ac8e-2eed8a828332, infoPort=34683, infoSecurePort=0, ipcPort=44229, storageInfo=lv=-57;cid=testClusterID;nsid=530650479;c=1689632150312), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 22:15:51,150 DEBUG [Listener at localhost/44229] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3 2023-07-17 22:15:51,156 INFO [Listener at localhost/44229] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/zookeeper_0, clientPort=53229, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-17 22:15:51,157 INFO [Listener at localhost/44229] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53229 2023-07-17 22:15:51,158 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,159 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,183 INFO [Listener at localhost/44229] util.FSUtils(471): Created version file at hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb with version=8 2023-07-17 22:15:51,183 INFO [Listener at localhost/44229] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38457/user/jenkins/test-data/c3a58bff-3240-2e0d-df82-3011b49bff9b/hbase-staging 2023-07-17 22:15:51,184 DEBUG [Listener at localhost/44229] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-17 22:15:51,184 DEBUG [Listener at localhost/44229] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-17 22:15:51,184 DEBUG [Listener at localhost/44229] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-17 22:15:51,184 DEBUG [Listener at localhost/44229] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-17 22:15:51,185 INFO [Listener at localhost/44229] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:51,185 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,186 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,186 INFO [Listener at localhost/44229] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:51,186 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,186 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:51,186 INFO [Listener at localhost/44229] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:51,187 INFO [Listener at localhost/44229] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37449 2023-07-17 22:15:51,187 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,188 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,189 INFO [Listener at localhost/44229] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37449 connecting to ZooKeeper ensemble=127.0.0.1:53229 2023-07-17 22:15:51,201 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:374490x0, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:51,202 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37449-0x101755b19770000 connected 2023-07-17 22:15:51,215 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:51,216 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:51,216 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:51,216 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37449 2023-07-17 22:15:51,217 DEBUG [Listener at localhost/44229] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37449 2023-07-17 22:15:51,217 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37449 2023-07-17 22:15:51,217 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37449 2023-07-17 22:15:51,217 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37449 2023-07-17 22:15:51,219 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:51,219 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:51,220 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:51,220 INFO [Listener at localhost/44229] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-17 22:15:51,220 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:51,220 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:51,220 INFO [Listener at localhost/44229] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 22:15:51,221 INFO [Listener at localhost/44229] http.HttpServer(1146): Jetty bound to port 41999 2023-07-17 22:15:51,221 INFO [Listener at localhost/44229] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:51,235 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,235 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@257227ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:51,236 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,236 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4f269b40{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:51,244 INFO [Listener at localhost/44229] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:51,245 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:51,245 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:51,245 INFO [Listener at localhost/44229] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 22:15:51,246 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,247 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@17bb529f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 22:15:51,248 INFO [Listener at localhost/44229] server.AbstractConnector(333): Started ServerConnector@43a47f7f{HTTP/1.1, (http/1.1)}{0.0.0.0:41999} 2023-07-17 22:15:51,248 INFO [Listener at localhost/44229] server.Server(415): Started @41521ms 2023-07-17 22:15:51,248 INFO [Listener at localhost/44229] master.HMaster(444): hbase.rootdir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb, hbase.cluster.distributed=false 2023-07-17 22:15:51,261 INFO [Listener at localhost/44229] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:51,261 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,261 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,261 INFO [Listener at localhost/44229] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 
22:15:51,261 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,261 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:51,261 INFO [Listener at localhost/44229] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:51,262 INFO [Listener at localhost/44229] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32813 2023-07-17 22:15:51,262 INFO [Listener at localhost/44229] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:51,263 DEBUG [Listener at localhost/44229] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:51,264 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,265 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,266 INFO [Listener at localhost/44229] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32813 connecting to ZooKeeper ensemble=127.0.0.1:53229 2023-07-17 22:15:51,269 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:328130x0, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:51,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32813-0x101755b19770001 connected 2023-07-17 22:15:51,271 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:51,271 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:51,272 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:51,272 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32813 2023-07-17 22:15:51,272 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32813 2023-07-17 22:15:51,273 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32813 2023-07-17 22:15:51,273 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32813 2023-07-17 22:15:51,273 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32813 2023-07-17 22:15:51,275 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:51,275 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:51,275 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:51,276 INFO [Listener at localhost/44229] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:51,276 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:51,276 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:51,276 INFO [Listener at localhost/44229] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 22:15:51,277 INFO [Listener at localhost/44229] http.HttpServer(1146): Jetty bound to port 35627 2023-07-17 22:15:51,277 INFO [Listener at localhost/44229] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:51,279 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,280 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@19753151{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:51,280 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,280 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@27b9cbdb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:51,286 INFO [Listener at localhost/44229] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:51,287 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:51,287 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:51,287 INFO [Listener at localhost/44229] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 22:15:51,288 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,289 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5a351aef{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:51,290 INFO [Listener at localhost/44229] server.AbstractConnector(333): Started ServerConnector@879cd83{HTTP/1.1, (http/1.1)}{0.0.0.0:35627} 2023-07-17 22:15:51,290 INFO [Listener at localhost/44229] server.Server(415): Started @41563ms 2023-07-17 22:15:51,301 INFO [Listener at localhost/44229] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:51,301 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,301 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,301 INFO [Listener at localhost/44229] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:51,301 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,301 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:51,302 INFO [Listener at localhost/44229] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:51,302 INFO [Listener at localhost/44229] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34241 2023-07-17 22:15:51,303 INFO [Listener at localhost/44229] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:51,304 DEBUG [Listener at localhost/44229] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:51,304 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,305 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,306 INFO [Listener at localhost/44229] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34241 connecting to ZooKeeper ensemble=127.0.0.1:53229 2023-07-17 22:15:51,309 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:342410x0, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:51,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34241-0x101755b19770002 connected 2023-07-17 22:15:51,310 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): 
regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:51,311 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:51,311 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:51,312 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34241 2023-07-17 22:15:51,312 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34241 2023-07-17 22:15:51,312 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34241 2023-07-17 22:15:51,312 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34241 2023-07-17 22:15:51,313 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34241 2023-07-17 22:15:51,314 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:51,314 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:51,314 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:51,315 INFO [Listener at localhost/44229] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:51,315 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:51,315 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:51,315 INFO [Listener at localhost/44229] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 22:15:51,315 INFO [Listener at localhost/44229] http.HttpServer(1146): Jetty bound to port 46383 2023-07-17 22:15:51,316 INFO [Listener at localhost/44229] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:51,317 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,317 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@59e8e2d8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:51,317 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,317 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c9668cb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:51,321 INFO [Listener at localhost/44229] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:51,322 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:51,322 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:51,322 INFO [Listener at localhost/44229] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 22:15:51,323 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,323 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@28b0b21b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:51,325 INFO [Listener at localhost/44229] server.AbstractConnector(333): Started ServerConnector@5bd01aa{HTTP/1.1, (http/1.1)}{0.0.0.0:46383} 2023-07-17 22:15:51,325 INFO [Listener at localhost/44229] server.Server(415): Started @41598ms 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:51,336 INFO [Listener at localhost/44229] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:51,337 INFO [Listener at localhost/44229] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36311 2023-07-17 22:15:51,337 INFO [Listener at localhost/44229] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:51,338 DEBUG [Listener at localhost/44229] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:51,338 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,339 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,340 INFO [Listener at localhost/44229] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36311 connecting to ZooKeeper ensemble=127.0.0.1:53229 2023-07-17 22:15:51,343 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:363110x0, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:51,344 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:363110x0, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:51,344 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36311-0x101755b19770003 connected 2023-07-17 22:15:51,345 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:51,345 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:51,345 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36311 2023-07-17 22:15:51,346 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36311 2023-07-17 22:15:51,346 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36311 2023-07-17 22:15:51,346 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36311 2023-07-17 22:15:51,347 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36311 2023-07-17 22:15:51,349 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:51,349 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:51,349 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:51,350 INFO [Listener at localhost/44229] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:51,350 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:51,350 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:51,350 INFO [Listener at localhost/44229] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 22:15:51,351 INFO [Listener at localhost/44229] http.HttpServer(1146): Jetty bound to port 36935 2023-07-17 22:15:51,351 INFO [Listener at localhost/44229] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:51,360 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,361 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@624f13b8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:51,361 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,361 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e2487a8{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:51,366 INFO [Listener at localhost/44229] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:51,367 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:51,367 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:51,368 INFO [Listener at localhost/44229] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 22:15:51,368 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:51,369 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@46a2470b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:51,371 INFO [Listener at localhost/44229] server.AbstractConnector(333): Started ServerConnector@85cae1c{HTTP/1.1, (http/1.1)}{0.0.0.0:36935} 2023-07-17 22:15:51,371 INFO [Listener at localhost/44229] server.Server(415): Started @41645ms 2023-07-17 22:15:51,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:51,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@13e6a5f3{HTTP/1.1, (http/1.1)}{0.0.0.0:34603} 2023-07-17 22:15:51,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41669ms 2023-07-17 22:15:51,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,399 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 22:15:51,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,400 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:51,400 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:51,400 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:51,400 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,400 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:51,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 22:15:51,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37449,1689632151185 from backup master directory 2023-07-17 
22:15:51,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 22:15:51,405 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,405 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 22:15:51,405 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:51,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/hbase.id with ID: 5007c51a-c4c6-4afd-9880-cacc55550013 2023-07-17 22:15:51,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:51,430 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x775834a5 to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:51,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@237bda4b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:51,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:51,444 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-17 22:15:51,444 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:51,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store-tmp 2023-07-17 22:15:51,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:51,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 22:15:51,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:51,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:51,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 22:15:51,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:51,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 22:15:51,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:51,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/WALs/jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37449%2C1689632151185, suffix=, logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/WALs/jenkins-hbase4.apache.org,37449,1689632151185, archiveDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/oldWALs, maxLogs=10 2023-07-17 22:15:51,475 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK] 2023-07-17 22:15:51,476 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK] 2023-07-17 22:15:51,479 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK] 2023-07-17 22:15:51,483 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/WALs/jenkins-hbase4.apache.org,37449,1689632151185/jenkins-hbase4.apache.org%2C37449%2C1689632151185.1689632151459 2023-07-17 22:15:51,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK], DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK], DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK]] 2023-07-17 22:15:51,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:51,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:51,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:51,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:51,485 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:51,486 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-17 22:15:51,486 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-17 22:15:51,487 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:51,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:51,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:51,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 22:15:51,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:51,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10804929920, jitterRate=0.006287515163421631}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:51,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:51,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-17 22:15:51,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-17 22:15:51,494 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-17 22:15:51,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-17 22:15:51,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-17 22:15:51,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-17 22:15:51,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-17 22:15:51,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-17 22:15:51,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-17 22:15:51,497 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-17 22:15:51,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-17 22:15:51,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-17 22:15:51,504 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-17 22:15:51,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-17 22:15:51,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-17 22:15:51,507 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:51,507 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:51,507 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-17 22:15:51,507 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:51,507 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,510 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37449,1689632151185, sessionid=0x101755b19770000, setting cluster-up flag (Was=false) 2023-07-17 22:15:51,514 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-17 22:15:51,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,522 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-17 22:15:51,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:51,528 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.hbase-snapshot/.tmp 2023-07-17 22:15:51,529 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-17 22:15:51,529 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-17 22:15:51,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-17 22:15:51,530 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:51,530 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-17 22:15:51,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:51,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 22:15:51,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 22:15:51,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 22:15:51,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:51,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689632181546 2023-07-17 22:15:51,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-17 22:15:51,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-17 22:15:51,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-17 22:15:51,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-17 22:15:51,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-17 22:15:51,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-17 22:15:51,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,547 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:51,547 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-17 22:15:51,548 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:51,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-17 22:15:51,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-17 22:15:51,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-17 22:15:51,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-17 22:15:51,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-17 22:15:51,552 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632151552,5,FailOnTimeoutGroup] 2023-07-17 22:15:51,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632151552,5,FailOnTimeoutGroup] 2023-07-17 22:15:51,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-17 22:15:51,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,567 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:51,568 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:51,568 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb 2023-07-17 22:15:51,574 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(951): ClusterId : 5007c51a-c4c6-4afd-9880-cacc55550013 2023-07-17 22:15:51,574 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:51,577 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:51,577 INFO 
[RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(951): ClusterId : 5007c51a-c4c6-4afd-9880-cacc55550013 2023-07-17 22:15:51,577 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(951): ClusterId : 5007c51a-c4c6-4afd-9880-cacc55550013 2023-07-17 22:15:51,577 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:51,577 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:51,577 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:51,580 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:51,580 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:51,580 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:51,580 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:51,580 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:51,582 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:51,587 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:51,589 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ReadOnlyZKClient(139): Connect 0x77869323 to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:51,591 DEBUG [RS:2;jenkins-hbase4:36311] zookeeper.ReadOnlyZKClient(139): Connect 0x7bee2bd3 to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:51,595 DEBUG [RS:1;jenkins-hbase4:34241] zookeeper.ReadOnlyZKClient(139): Connect 0x2d28b3ea to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:51,604 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:51,605 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:51,606 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73b9190b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:51,607 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36bb9918, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:51,607 DEBUG [RS:1;jenkins-hbase4:34241] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d553837, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:51,607 DEBUG [RS:1;jenkins-hbase4:34241] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c4fd97d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:51,607 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/info 2023-07-17 22:15:51,608 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:51,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:51,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:51,610 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:51,610 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:51,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 
22:15:51,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:51,615 DEBUG [RS:2;jenkins-hbase4:36311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7fd1a076, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:51,615 DEBUG [RS:2;jenkins-hbase4:36311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ad321e8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:51,616 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:32813 2023-07-17 22:15:51,616 INFO [RS:0;jenkins-hbase4:32813] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:51,616 INFO [RS:0;jenkins-hbase4:32813] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:51,616 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 22:15:51,617 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37449,1689632151185 with isa=jenkins-hbase4.apache.org/172.31.14.131:32813, startcode=1689632151260 2023-07-17 22:15:51,617 DEBUG [RS:0;jenkins-hbase4:32813] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:51,619 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34241 2023-07-17 22:15:51,619 INFO [RS:1;jenkins-hbase4:34241] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:51,619 INFO [RS:1;jenkins-hbase4:34241] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:51,619 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1022): About to register with Master. 
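The three RegionServers above each open a ReadOnlyZKClient connection to the test quorum at 127.0.0.1:53229 (session timeout 90000 ms) and prepare to report for duty to the master at jenkins-hbase4.apache.org,37449. For reference, a client of the same mini-cluster would be wired up roughly as in the sketch below; the quorum, port and timeout are taken from the log lines above, everything else is a generic, illustrative use of the public HBase client API rather than code from this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Quorum and client port as logged by ReadOnlyZKClient (127.0.0.1:53229).
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 53229);
        // Session timeout matching the 90000 ms shown for the RS ZooKeeper clients.
        conf.setInt("zookeeper.session.timeout", 90000);
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          System.out.println("connected: " + !connection.isClosed());
        }
      }
    }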
2023-07-17 22:15:51,619 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37449,1689632151185 with isa=jenkins-hbase4.apache.org/172.31.14.131:34241, startcode=1689632151301 2023-07-17 22:15:51,620 DEBUG [RS:1;jenkins-hbase4:34241] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:51,620 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44745, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:51,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/table 2023-07-17 22:15:51,621 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:51,622 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34245, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:51,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:51,623 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37449] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,623 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:51,624 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-17 22:15:51,624 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37449] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,624 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
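The CompactionConfiguration entries above (minCompactSize 128 MB, files [3,10), ratio 1.2, off-peak ratio 5.0, major period 604800000 ms with 0.5 jitter) are the stock values applied to the info/rep_barrier/table families of hbase:meta. If a test needed different behaviour, the usual knobs are the standard hbase-site.xml keys sketched below; the key names are an assumption about the normal configuration surface, not something read from this test's config.

    // Imports and HBaseConfiguration.create() as in the connection sketch above.
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // 7 days
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);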
2023-07-17 22:15:51,624 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-17 22:15:51,624 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb 2023-07-17 22:15:51,624 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46771 2023-07-17 22:15:51,624 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb 2023-07-17 22:15:51,624 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740 2023-07-17 22:15:51,624 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41999 2023-07-17 22:15:51,624 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46771 2023-07-17 22:15:51,625 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41999 2023-07-17 22:15:51,625 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740 2023-07-17 22:15:51,627 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36311 2023-07-17 22:15:51,627 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-17 22:15:51,627 INFO [RS:2;jenkins-hbase4:36311] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:51,627 INFO [RS:2;jenkins-hbase4:36311] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:51,627 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-17 22:15:51,629 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:51,629 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:51,629 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37449,1689632151185 with isa=jenkins-hbase4.apache.org/172.31.14.131:36311, startcode=1689632151335 2023-07-17 22:15:51,629 DEBUG [RS:2;jenkins-hbase4:36311] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:51,629 DEBUG [RS:1;jenkins-hbase4:34241] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,629 WARN [RS:1;jenkins-hbase4:34241] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:51,630 INFO [RS:1;jenkins-hbase4:34241] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:51,630 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,630 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,630 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56367, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:51,630 WARN [RS:0;jenkins-hbase4:32813] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:51,631 INFO [RS:0;jenkins-hbase4:32813] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:51,631 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37449] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,631 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,631 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
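Each RegionServer instantiates a WALProvider of type AsyncFSWALProvider here; later in this stretch the WALs are created with blocksize 256 MB, rollsize 128 MB and maxLogs=32. Those choices are configuration-driven; the keys below are the standard ones and are shown as a hedged illustration, not as this test's actual hbase-site.xml contents.

    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");                              // selects AsyncFSWALProvider
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);  // WAL block size
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);           // rollsize = blocksize * multiplier
    conf.setInt("hbase.regionserver.maxlogs", 32);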
2023-07-17 22:15:51,631 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-17 22:15:51,631 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb 2023-07-17 22:15:51,631 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46771 2023-07-17 22:15:51,631 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41999 2023-07-17 22:15:51,633 DEBUG [RS:2;jenkins-hbase4:36311] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,633 WARN [RS:2;jenkins-hbase4:36311] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 22:15:51,633 INFO [RS:2;jenkins-hbase4:36311] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:51,633 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,637 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:51,641 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10216676800, jitterRate=-0.04849782586097717}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:51,641 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36311,1689632151335] 2023-07-17 22:15:51,641 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32813,1689632151260] 2023-07-17 22:15:51,641 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34241,1689632151301] 2023-07-17 22:15:51,641 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:51,641 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:51,641 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:51,641 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:51,642 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:51,642 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:51,642 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 
2023-07-17 22:15:51,642 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:51,642 DEBUG [RS:1;jenkins-hbase4:34241] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,642 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,643 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,643 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 22:15:51,643 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-17 22:15:51,643 DEBUG [RS:1;jenkins-hbase4:34241] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-17 22:15:51,643 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,644 DEBUG [RS:1;jenkins-hbase4:34241] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,644 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-17 22:15:51,644 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:51,644 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:51,645 INFO [RS:0;jenkins-hbase4:32813] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:51,645 INFO [RS:1;jenkins-hbase4:34241] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:51,646 DEBUG [RS:2;jenkins-hbase4:36311] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,646 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-17 22:15:51,646 DEBUG 
[RS:2;jenkins-hbase4:36311] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,647 DEBUG [RS:2;jenkins-hbase4:36311] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,647 INFO [RS:0;jenkins-hbase4:32813] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:51,647 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:51,648 INFO [RS:2;jenkins-hbase4:36311] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:51,650 INFO [RS:1;jenkins-hbase4:34241] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:51,651 INFO [RS:2;jenkins-hbase4:36311] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:51,651 INFO [RS:0;jenkins-hbase4:32813] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:51,651 INFO [RS:1;jenkins-hbase4:34241] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:51,651 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,651 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,651 INFO [RS:2;jenkins-hbase4:36311] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:51,651 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,651 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:51,652 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:51,652 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:51,653 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,653 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
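The MemStoreFlusher limits above (globalMemStoreLimit=782.4 M, low mark 743.3 M) and the PressureAwareCompactionThroughputController bounds (50–100 MB/s, tuning period 60 s) are derived from heap-fraction and throughput settings. The property names below are my assumption of the usual keys for these components; treat the snippet as illustrative only.

    Configuration conf = HBaseConfiguration.create();
    // Fraction of the RS heap available to all memstores, and the flush low-water fraction of it.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Compaction throughput bounds in bytes/second (assumed key names).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);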
2023-07-17 22:15:51,654 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,654 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,654 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,654 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,654 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,654 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,654 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:2;jenkins-hbase4:36311] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,655 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,656 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,656 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,656 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:51,656 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
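The executor.ExecutorService entries show each RegionServer creating one small, fixed-size handler pool per event type (RS_OPEN_REGION, RS_CLOSE_REGION, RS_LOG_REPLAY_OPS, and so on). The HBase ExecutorService class itself is internal; purely as an analogy for the pattern, not the actual implementation, each pool behaves like a bounded java.util.concurrent pool dedicated to one kind of event:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Analogy: RS_OPEN_REGION with corePoolSize=maxPoolSize=1 is effectively a 1-thread pool.
    ExecutorService openRegionPool =
        Executors.newFixedThreadPool(1, r -> new Thread(r, "RS_OPEN_REGION-sketch"));
    openRegionPool.submit(() -> System.out.println("open-region handler would run here"));
    openRegionPool.shutdown();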
2023-07-17 22:15:51,657 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,657 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,657 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,657 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,657 DEBUG [RS:1;jenkins-hbase4:34241] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:51,658 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,659 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,659 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,661 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,661 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,661 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,670 INFO [RS:2;jenkins-hbase4:36311] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:51,670 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36311,1689632151335-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,672 INFO [RS:0;jenkins-hbase4:32813] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:51,672 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32813,1689632151260-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:51,674 INFO [RS:1;jenkins-hbase4:34241] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:51,674 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34241,1689632151301-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
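CompactionChecker, MemstoreFlusherChore, nonceCleaner and the per-server HeapMemoryTunerChore above are all ScheduledChore instances registered with the ChoreService at the periods shown (1 s, 1 s, 360 s, 60 s). A minimal sketch of that pattern follows; the chore name and body are made up for illustration.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService choreService = new ChoreService("sketch");
        // Period mirrors the 1000 ms CompactionChecker/MemstoreFlusherChore entries above.
        choreService.scheduleChore(new ScheduledChore("exampleChore", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("periodic check would run here");
          }
        });
        Thread.sleep(3000);
        choreService.shutdown();
      }
    }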
2023-07-17 22:15:51,687 INFO [RS:0;jenkins-hbase4:32813] regionserver.Replication(203): jenkins-hbase4.apache.org,32813,1689632151260 started 2023-07-17 22:15:51,687 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32813,1689632151260, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32813, sessionid=0x101755b19770001 2023-07-17 22:15:51,687 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:51,687 DEBUG [RS:0;jenkins-hbase4:32813] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,687 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32813,1689632151260' 2023-07-17 22:15:51,687 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:51,688 INFO [RS:2;jenkins-hbase4:36311] regionserver.Replication(203): jenkins-hbase4.apache.org,36311,1689632151335 started 2023-07-17 22:15:51,688 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36311,1689632151335, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36311, sessionid=0x101755b19770003 2023-07-17 22:15:51,688 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:51,688 DEBUG [RS:2;jenkins-hbase4:36311] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,688 DEBUG [RS:2;jenkins-hbase4:36311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36311,1689632151335' 2023-07-17 22:15:51,688 DEBUG [RS:2;jenkins-hbase4:36311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:51,688 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:51,688 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:51,688 DEBUG [RS:2;jenkins-hbase4:36311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:51,688 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:51,688 DEBUG [RS:0;jenkins-hbase4:32813] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:51,688 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32813,1689632151260' 2023-07-17 22:15:51,688 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:51,688 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:51,689 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:51,689 
DEBUG [RS:2;jenkins-hbase4:36311] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:51,689 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:51,689 DEBUG [RS:2;jenkins-hbase4:36311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36311,1689632151335' 2023-07-17 22:15:51,689 DEBUG [RS:2;jenkins-hbase4:36311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:51,689 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:51,689 INFO [RS:0;jenkins-hbase4:32813] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:51,689 INFO [RS:0;jenkins-hbase4:32813] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 22:15:51,689 DEBUG [RS:2;jenkins-hbase4:36311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:51,689 INFO [RS:1;jenkins-hbase4:34241] regionserver.Replication(203): jenkins-hbase4.apache.org,34241,1689632151301 started 2023-07-17 22:15:51,689 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34241,1689632151301, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34241, sessionid=0x101755b19770002 2023-07-17 22:15:51,689 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:51,689 DEBUG [RS:1;jenkins-hbase4:34241] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,689 DEBUG [RS:1;jenkins-hbase4:34241] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34241,1689632151301' 2023-07-17 22:15:51,689 DEBUG [RS:1;jenkins-hbase4:34241] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:51,689 DEBUG [RS:2;jenkins-hbase4:36311] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:51,690 INFO [RS:2;jenkins-hbase4:36311] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:51,690 INFO [RS:2;jenkins-hbase4:36311] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
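The flush-table-proc and online-snapshot procedure members started on each RegionServer are the server-side halves of coordinated table flushes and snapshots; the client-facing triggers live on the Admin API. A hedged example follows (the table and snapshot names are hypothetical, and conf is the Configuration from the earlier connection sketch):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName table = TableName.valueOf("t1");       // hypothetical table
      admin.flush(table);                              // coordinated via the flush-table-proc members
      admin.snapshot("t1_snapshot", table);            // coordinated via the online-snapshot members
    }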
2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34241,1689632151301' 2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:51,690 DEBUG [RS:1;jenkins-hbase4:34241] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:51,691 DEBUG [RS:1;jenkins-hbase4:34241] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:51,691 INFO [RS:1;jenkins-hbase4:34241] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:51,691 INFO [RS:1;jenkins-hbase4:34241] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 22:15:51,791 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32813%2C1689632151260, suffix=, logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,32813,1689632151260, archiveDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs, maxLogs=32 2023-07-17 22:15:51,791 INFO [RS:2;jenkins-hbase4:36311] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36311%2C1689632151335, suffix=, logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,36311,1689632151335, archiveDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs, maxLogs=32 2023-07-17 22:15:51,793 INFO [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34241%2C1689632151301, suffix=, logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,34241,1689632151301, archiveDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs, maxLogs=32 2023-07-17 22:15:51,796 DEBUG [jenkins-hbase4:37449] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-17 22:15:51,796 DEBUG [jenkins-hbase4:37449] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:51,796 DEBUG [jenkins-hbase4:37449] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:51,797 DEBUG [jenkins-hbase4:37449] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:51,797 DEBUG [jenkins-hbase4:37449] 
balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:51,797 DEBUG [jenkins-hbase4:37449] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:51,800 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34241,1689632151301, state=OPENING 2023-07-17 22:15:51,802 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-17 22:15:51,804 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:51,804 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34241,1689632151301}] 2023-07-17 22:15:51,804 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:51,841 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK] 2023-07-17 22:15:51,842 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK] 2023-07-17 22:15:51,842 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK] 2023-07-17 22:15:51,842 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK] 2023-07-17 22:15:51,842 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK] 2023-07-17 22:15:51,842 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK] 2023-07-17 22:15:51,843 WARN [ReadOnlyZKClient-127.0.0.1:53229@0x775834a5] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-17 22:15:51,843 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK] 2023-07-17 22:15:51,843 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] 
ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:51,846 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK] 2023-07-17 22:15:51,846 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK] 2023-07-17 22:15:51,851 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40508, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:51,851 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34241] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:40508 deadline: 1689632211851, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,855 INFO [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,34241,1689632151301/jenkins-hbase4.apache.org%2C34241%2C1689632151301.1689632151799 2023-07-17 22:15:51,855 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,32813,1689632151260/jenkins-hbase4.apache.org%2C32813%2C1689632151260.1689632151799 2023-07-17 22:15:51,859 DEBUG [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK], DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK]] 2023-07-17 22:15:51,859 DEBUG [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK], DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK]] 2023-07-17 22:15:51,860 INFO [RS:2;jenkins-hbase4:36311] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,36311,1689632151335/jenkins-hbase4.apache.org%2C36311%2C1689632151335.1689632151800 2023-07-17 22:15:51,860 DEBUG [RS:2;jenkins-hbase4:36311] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK], DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK]] 2023-07-17 22:15:51,959 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:51,961 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:51,963 INFO 
[RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40524, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:51,966 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-17 22:15:51,967 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:51,968 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34241%2C1689632151301.meta, suffix=.meta, logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,34241,1689632151301, archiveDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs, maxLogs=32 2023-07-17 22:15:51,981 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK] 2023-07-17 22:15:51,981 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK] 2023-07-17 22:15:51,981 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK] 2023-07-17 22:15:51,984 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,34241,1689632151301/jenkins-hbase4.apache.org%2C34241%2C1689632151301.meta.1689632151968.meta 2023-07-17 22:15:51,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK], DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK]] 2023-07-17 22:15:51,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:51,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:51,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 22:15:51,985 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
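The hbase:meta descriptor carries the MultiRowMutationEndpoint coprocessor, which the RegionServer loads from the HTD when it opens the region, as logged above. Attaching a coprocessor to a user table descriptor uses the same mechanism; a sketch with an illustrative table name (admin as in the flush/snapshot sketch earlier):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("t1"))                       // hypothetical table
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    admin.createTable(td);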
2023-07-17 22:15:51,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 22:15:51,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:51,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 22:15:51,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 22:15:51,986 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 22:15:51,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/info 2023-07-17 22:15:51,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/info 2023-07-17 22:15:51,987 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 22:15:51,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:51,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 22:15:51,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:51,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/rep_barrier 2023-07-17 22:15:51,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 22:15:51,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:51,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 22:15:51,991 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/table 2023-07-17 22:15:51,991 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/table 2023-07-17 22:15:51,991 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 22:15:51,991 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:51,992 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740 2023-07-17 22:15:51,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740 2023-07-17 22:15:51,995 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
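The SteppingSplitPolicy/IncreasingToUpperBoundRegionSplitPolicy chain and the FlushLargeStoresPolicy fallback in this stretch (42.7 M = the memstore flush size divided by the three meta column families) reflect the split-policy and flush sizing in effect. The usual configuration points are sketched below with their stock values; the key names are standard HBase properties given as an assumption, not values read from this test.

    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);  // base for desiredMaxFileSize (jitter applied)
    conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024); // 128 MB; /3 families ~= 42.7 M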
2023-07-17 22:15:51,996 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 22:15:51,996 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9518122240, jitterRate=-0.11355578899383545}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 22:15:51,996 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 22:15:51,997 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689632151959 2023-07-17 22:15:52,001 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 22:15:52,002 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 22:15:52,002 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34241,1689632151301, state=OPEN 2023-07-17 22:15:52,003 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 22:15:52,003 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 22:15:52,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-17 22:15:52,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34241,1689632151301 in 199 msec 2023-07-17 22:15:52,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-17 22:15:52,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 362 msec 2023-07-17 22:15:52,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 477 msec 2023-07-17 22:15:52,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689632152008, completionTime=-1 2023-07-17 22:15:52,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-17 22:15:52,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
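Once the meta location has been published (the /hbase/meta-region-server update above), a client can resolve it through the normal RegionLocator API. A small sketch, assuming an hbase-site.xml pointing at this cluster is on the classpath; the class name is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Resolves the single hbase:meta region (1588230740) to its current server.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }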
2023-07-17 22:15:52,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-17 22:15:52,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689632212012 2023-07-17 22:15:52,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689632272012 2023-07-17 22:15:52,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37449,1689632151185-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37449,1689632151185-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37449,1689632151185-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37449, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-17 22:15:52,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:52,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-17 22:15:52,022 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-17 22:15:52,025 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:52,025 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:52,027 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,028 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c empty. 2023-07-17 22:15:52,028 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,028 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-17 22:15:52,042 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:52,043 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7f45af643b80ad8d1187de5cd9d7385c, NAME => 'hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp 2023-07-17 22:15:52,057 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:52,057 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7f45af643b80ad8d1187de5cd9d7385c, disabling compactions & flushes 2023-07-17 22:15:52,057 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 
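The descriptor in the create call above ({NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', ...}) corresponds to the 2.x builder API. A sketch building a user table with the same column-family settings; the table name demo_ns_like is hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws IOException {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_ns_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                .setInMemory(true)                 // IN_MEMORY => 'true'
                .setMaxVersions(10)                // VERSIONS => '10'
                .setBlocksize(8192)                // BLOCKSIZE => '8192'
                .build())
            .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Drives the same CreateTableProcedure state machine the PEWorker lines show.
          admin.createTable(td);
        }
      }
    }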
2023-07-17 22:15:52,057 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:52,057 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. after waiting 0 ms 2023-07-17 22:15:52,057 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:52,057 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:52,057 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7f45af643b80ad8d1187de5cd9d7385c: 2023-07-17 22:15:52,060 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:52,061 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632152061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632152061"}]},"ts":"1689632152061"} 2023-07-17 22:15:52,064 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:52,064 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:52,065 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632152064"}]},"ts":"1689632152064"} 2023-07-17 22:15:52,066 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-17 22:15:52,068 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:52,068 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:52,068 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:52,068 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:52,068 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:52,068 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7f45af643b80ad8d1187de5cd9d7385c, ASSIGN}] 2023-07-17 22:15:52,070 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7f45af643b80ad8d1187de5cd9d7385c, ASSIGN 2023-07-17 22:15:52,071 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7f45af643b80ad8d1187de5cd9d7385c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36311,1689632151335; forceNewPlan=false, retain=false 2023-07-17 22:15:52,155 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:52,157 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 22:15:52,159 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:52,159 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:52,161 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,161 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63 empty. 
2023-07-17 22:15:52,162 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,162 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-17 22:15:52,174 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:52,175 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 25cb52fd09d0e74e3e55de2cb1287a63, NAME => 'hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp 2023-07-17 22:15:52,183 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:52,183 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 25cb52fd09d0e74e3e55de2cb1287a63, disabling compactions & flushes 2023-07-17 22:15:52,184 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:52,184 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:52,184 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. after waiting 0 ms 2023-07-17 22:15:52,184 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:52,184 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 
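The hbase:rsgroup table above additionally carries table-level attributes: the MultiRowMutationEndpoint coprocessor and DisabledRegionSplitPolicy. Expressed with the same builder API as the previous sketch (the table name is illustrative, and the column family is created with defaults here):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeDescriptorSketch {
      public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            // Same endpoint the log loads from the HTD of hbase:rsgroup.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // Matches the SPLIT_POLICY metadata shown in the create call.
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
        System.out.println(td);
      }
    }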
2023-07-17 22:15:52,184 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 25cb52fd09d0e74e3e55de2cb1287a63: 2023-07-17 22:15:52,186 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:52,187 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632152186"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632152186"}]},"ts":"1689632152186"} 2023-07-17 22:15:52,188 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 22:15:52,189 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:52,189 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632152189"}]},"ts":"1689632152189"} 2023-07-17 22:15:52,190 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-17 22:15:52,199 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:52,199 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:52,199 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:52,199 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:52,199 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:52,199 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=25cb52fd09d0e74e3e55de2cb1287a63, ASSIGN}] 2023-07-17 22:15:52,200 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=25cb52fd09d0e74e3e55de2cb1287a63, ASSIGN 2023-07-17 22:15:52,201 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=25cb52fd09d0e74e3e55de2cb1287a63, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32813,1689632151260; forceNewPlan=false, retain=false 2023-07-17 22:15:52,201 INFO [jenkins-hbase4:37449] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
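The balancer output above places the two new regions; from the client side the resulting placement can be read back through RegionLocator. A sketch, reusing the connection setup from the earlier examples:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class AssignmentSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // Prints each region's encoded name and the server it was assigned to.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }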
2023-07-17 22:15:52,203 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7f45af643b80ad8d1187de5cd9d7385c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,203 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632152203"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632152203"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632152203"}]},"ts":"1689632152203"} 2023-07-17 22:15:52,203 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=25cb52fd09d0e74e3e55de2cb1287a63, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,203 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632152203"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632152203"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632152203"}]},"ts":"1689632152203"} 2023-07-17 22:15:52,204 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 7f45af643b80ad8d1187de5cd9d7385c, server=jenkins-hbase4.apache.org,36311,1689632151335}] 2023-07-17 22:15:52,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 25cb52fd09d0e74e3e55de2cb1287a63, server=jenkins-hbase4.apache.org,32813,1689632151260}] 2023-07-17 22:15:52,357 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,357 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,357 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:52,358 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:52,359 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:52,359 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37198, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:52,363 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 
2023-07-17 22:15:52,363 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 25cb52fd09d0e74e3e55de2cb1287a63, NAME => 'hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 22:15:52,364 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. service=MultiRowMutationService 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f45af643b80ad8d1187de5cd9d7385c, NAME => 'hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:52,364 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,367 INFO [StoreOpener-7f45af643b80ad8d1187de5cd9d7385c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family info of region 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,368 DEBUG [StoreOpener-7f45af643b80ad8d1187de5cd9d7385c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/info 2023-07-17 22:15:52,368 DEBUG [StoreOpener-7f45af643b80ad8d1187de5cd9d7385c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/info 2023-07-17 22:15:52,368 INFO [StoreOpener-7f45af643b80ad8d1187de5cd9d7385c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f45af643b80ad8d1187de5cd9d7385c columnFamilyName info 2023-07-17 22:15:52,369 INFO [StoreOpener-7f45af643b80ad8d1187de5cd9d7385c-1] regionserver.HStore(310): Store=7f45af643b80ad8d1187de5cd9d7385c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:52,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,373 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:52,378 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:52,379 INFO [StoreOpener-25cb52fd09d0e74e3e55de2cb1287a63-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,379 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f45af643b80ad8d1187de5cd9d7385c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11837561280, jitterRate=0.10245880484580994}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 
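The split-policy summary printed at region open (SteppingSplitPolicy wrapping ConstantSizeRegionSplitPolicy with a jittered desiredMaxFileSize) is controlled by two standard settings. A configuration sketch; the class name is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SplitPolicyConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // The default policy class used by regions (the log shows SteppingSplitPolicy).
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        // Base desiredMaxFileSize before the per-region jitter seen in the log.
        conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
        System.out.println(conf.get("hbase.regionserver.region.split.policy"));
      }
    }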
2023-07-17 22:15:52,379 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f45af643b80ad8d1187de5cd9d7385c: 2023-07-17 22:15:52,380 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c., pid=8, masterSystemTime=1689632152357 2023-07-17 22:15:52,380 DEBUG [StoreOpener-25cb52fd09d0e74e3e55de2cb1287a63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/m 2023-07-17 22:15:52,380 DEBUG [StoreOpener-25cb52fd09d0e74e3e55de2cb1287a63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/m 2023-07-17 22:15:52,382 INFO [StoreOpener-25cb52fd09d0e74e3e55de2cb1287a63-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 25cb52fd09d0e74e3e55de2cb1287a63 columnFamilyName m 2023-07-17 22:15:52,383 INFO [StoreOpener-25cb52fd09d0e74e3e55de2cb1287a63-1] regionserver.HStore(310): Store=25cb52fd09d0e74e3e55de2cb1287a63/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:52,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:52,384 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 
2023-07-17 22:15:52,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,384 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7f45af643b80ad8d1187de5cd9d7385c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,384 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689632152384"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632152384"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632152384"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632152384"}]},"ts":"1689632152384"} 2023-07-17 22:15:52,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,387 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-17 22:15:52,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 7f45af643b80ad8d1187de5cd9d7385c, server=jenkins-hbase4.apache.org,36311,1689632151335 in 182 msec 2023-07-17 22:15:52,389 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:52,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-17 22:15:52,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7f45af643b80ad8d1187de5cd9d7385c, ASSIGN in 320 msec 2023-07-17 22:15:52,390 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:52,390 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632152390"}]},"ts":"1689632152390"} 2023-07-17 22:15:52,392 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-17 22:15:52,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:52,394 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 25cb52fd09d0e74e3e55de2cb1287a63; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6630277f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:52,394 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region 
open journal for 25cb52fd09d0e74e3e55de2cb1287a63: 2023-07-17 22:15:52,394 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63., pid=9, masterSystemTime=1689632152357 2023-07-17 22:15:52,397 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:52,397 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:52,398 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:52,398 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=25cb52fd09d0e74e3e55de2cb1287a63, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,399 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689632152398"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632152398"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632152398"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632152398"}]},"ts":"1689632152398"} 2023-07-17 22:15:52,399 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 379 msec 2023-07-17 22:15:52,401 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-17 22:15:52,401 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 25cb52fd09d0e74e3e55de2cb1287a63, server=jenkins-hbase4.apache.org,32813,1689632151260 in 195 msec 2023-07-17 22:15:52,402 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-17 22:15:52,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=25cb52fd09d0e74e3e55de2cb1287a63, ASSIGN in 202 msec 2023-07-17 22:15:52,403 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:52,403 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632152403"}]},"ts":"1689632152403"} 2023-07-17 22:15:52,404 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-17 22:15:52,406 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:52,407 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 251 
msec 2023-07-17 22:15:52,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-17 22:15:52,423 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:52,423 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:52,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:52,428 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55310, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:52,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-17 22:15:52,439 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:52,442 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-17 22:15:52,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 22:15:52,457 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:52,459 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:52,460 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-17 22:15:52,461 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37214, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:52,463 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-17 22:15:52,463 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
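The two CreateNamespaceProcedure entries above create the built-in default and hbase namespaces; a user namespace goes through the same procedure via Admin. A minimal sketch with a hypothetical namespace name:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Triggers a CreateNamespaceProcedure like the ones logged above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
          System.out.println(admin.listNamespaceDescriptors().length + " namespaces");
        }
      }
    }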
2023-07-17 22:15:52,470 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-17 22:15:52,472 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-17 22:15:52,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.067sec 2023-07-17 22:15:52,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-17 22:15:52,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-17 22:15:52,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-17 22:15:52,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37449,1689632151185-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-17 22:15:52,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37449,1689632151185-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-17 22:15:52,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-17 22:15:52,476 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:52,476 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:52,477 DEBUG [Listener at localhost/44229] zookeeper.ReadOnlyZKClient(139): Connect 0x1ecadc90 to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:52,479 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:52,480 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-17 22:15:52,486 DEBUG [Listener at localhost/44229] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52be585, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:52,487 DEBUG [hconnection-0x51656034-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:52,490 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40536, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:52,492 INFO [Listener at localhost/44229] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:52,492 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:52,494 DEBUG [Listener at localhost/44229] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-17 22:15:52,496 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-17 22:15:52,501 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-17 22:15:52,501 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:52,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-17 22:15:52,502 DEBUG [Listener at localhost/44229] zookeeper.ReadOnlyZKClient(139): Connect 0x39e05e31 to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:52,512 DEBUG [Listener at localhost/44229] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@437efdf9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:52,513 INFO [Listener at localhost/44229] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53229 2023-07-17 22:15:52,517 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:52,519 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101755b1977000a connected 2023-07-17 22:15:52,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:52,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:52,525 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-17 22:15:52,536 INFO [Listener at localhost/44229] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 22:15:52,536 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:52,536 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): 
Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:52,536 INFO [Listener at localhost/44229] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 22:15:52,537 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 22:15:52,537 INFO [Listener at localhost/44229] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 22:15:52,537 INFO [Listener at localhost/44229] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 22:15:52,537 INFO [Listener at localhost/44229] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33873 2023-07-17 22:15:52,538 INFO [Listener at localhost/44229] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 22:15:52,539 DEBUG [Listener at localhost/44229] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 22:15:52,539 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:52,540 INFO [Listener at localhost/44229] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 22:15:52,541 INFO [Listener at localhost/44229] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33873 connecting to ZooKeeper ensemble=127.0.0.1:53229 2023-07-17 22:15:52,545 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:338730x0, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 22:15:52,546 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(162): regionserver:338730x0, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 22:15:52,546 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33873-0x101755b1977000b connected 2023-07-17 22:15:52,547 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(162): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-17 22:15:52,548 DEBUG [Listener at localhost/44229] zookeeper.ZKUtil(164): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 22:15:52,548 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33873 2023-07-17 22:15:52,548 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33873 2023-07-17 22:15:52,548 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33873 
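The handler counts and the 782.40 MB block cache reported while this extra region server comes up are governed by standard settings; a sketch of the corresponding keys, with values chosen only to mirror what the log shows:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionServerSizingSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Default call-queue handler pool (the handlerCount=3 in the RpcExecutor lines).
        conf.setInt("hbase.regionserver.handler.count", 3);
        // Fraction of the heap given to the BlockCache ("Allocating BlockCache size=..." line).
        conf.setFloat("hfile.block.cache.size", 0.4f);
        System.out.println(conf.getInt("hbase.regionserver.handler.count", -1));
      }
    }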
2023-07-17 22:15:52,551 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33873 2023-07-17 22:15:52,553 DEBUG [Listener at localhost/44229] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33873 2023-07-17 22:15:52,554 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 22:15:52,554 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 22:15:52,554 INFO [Listener at localhost/44229] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 22:15:52,555 INFO [Listener at localhost/44229] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 22:15:52,555 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 22:15:52,555 INFO [Listener at localhost/44229] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 22:15:52,555 INFO [Listener at localhost/44229] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 22:15:52,555 INFO [Listener at localhost/44229] http.HttpServer(1146): Jetty bound to port 43027 2023-07-17 22:15:52,556 INFO [Listener at localhost/44229] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 22:15:52,558 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:52,558 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7eac29c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,AVAILABLE} 2023-07-17 22:15:52,558 INFO [Listener at localhost/44229] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:52,559 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@65d032f9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 22:15:52,564 INFO [Listener at localhost/44229] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 22:15:52,564 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 22:15:52,565 INFO [Listener at localhost/44229] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 22:15:52,565 INFO [Listener at localhost/44229] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 22:15:52,567 INFO [Listener at localhost/44229] 
http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 22:15:52,567 INFO [Listener at localhost/44229] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@307a6dc1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:52,569 INFO [Listener at localhost/44229] server.AbstractConnector(333): Started ServerConnector@6ac5b445{HTTP/1.1, (http/1.1)}{0.0.0.0:43027} 2023-07-17 22:15:52,569 INFO [Listener at localhost/44229] server.Server(415): Started @42842ms 2023-07-17 22:15:52,571 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(951): ClusterId : 5007c51a-c4c6-4afd-9880-cacc55550013 2023-07-17 22:15:52,571 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 22:15:52,573 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 22:15:52,573 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 22:15:52,575 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 22:15:52,577 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ReadOnlyZKClient(139): Connect 0x2cccd12a to 127.0.0.1:53229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 22:15:52,582 DEBUG [RS:3;jenkins-hbase4:33873] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@220b3ac3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 22:15:52,582 DEBUG [RS:3;jenkins-hbase4:33873] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2634ba5a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:52,590 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33873 2023-07-17 22:15:52,590 INFO [RS:3;jenkins-hbase4:33873] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 22:15:52,590 INFO [RS:3;jenkins-hbase4:33873] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 22:15:52,590 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1022): About to register with Master. 
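The RS:3 startup above (Jetty info server, ClusterId, procedure managers) is what a test sees when it adds an extra RegionServer to an already-running mini-cluster. A minimal sketch, assuming the standard HBase 2.4 test utilities (method names taken from HBaseTestingUtility/MiniHBaseCluster, not from this test's source):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

    public class ExtraRegionServerSketch {
      // Sketch: add a fourth RegionServer to a running mini-cluster, producing a
      // startup sequence much like the RS:3 log above.
      static void addOneRegionServer(HBaseTestingUtility testUtil) throws Exception {
        RegionServerThread extraRs = testUtil.getMiniHBaseCluster().startRegionServer();
        extraRs.waitForServerOnline(); // returns once the new RS has reported for duty
      }
    }
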
2023-07-17 22:15:52,591 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37449,1689632151185 with isa=jenkins-hbase4.apache.org/172.31.14.131:33873, startcode=1689632152536 2023-07-17 22:15:52,591 DEBUG [RS:3;jenkins-hbase4:33873] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 22:15:52,593 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45007, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 22:15:52,594 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37449] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,594 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 22:15:52,594 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb 2023-07-17 22:15:52,594 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46771 2023-07-17 22:15:52,594 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41999 2023-07-17 22:15:52,599 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:52,599 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:52,599 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:52,599 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:52,599 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:52,599 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ZKUtil(162): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,599 WARN [RS:3;jenkins-hbase4:33873] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
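After reportForDuty succeeds (the ServerManager "Registering regionserver=..." line above), tests usually block until the master actually counts the new server as online. A hedged sketch using the generic waitFor helper; the master-side accessors are the usual test hooks and should be double-checked against the 2.4 API:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class WaitForRegistrationSketch {
      // Sketch: wait until the master's ServerManager reports the expected number
      // of live RegionServers (here, four once RS:3 has registered).
      static void waitForRegionServerCount(HBaseTestingUtility testUtil, int expected) throws Exception {
        testUtil.waitFor(60_000, () ->
            testUtil.getMiniHBaseCluster().getMaster()
                    .getServerManager().getOnlineServersList().size() == expected);
      }
    }
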
2023-07-17 22:15:52,599 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33873,1689632152536] 2023-07-17 22:15:52,599 INFO [RS:3;jenkins-hbase4:33873] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 22:15:52,599 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 22:15:52,599 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,599 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,603 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,603 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-17 22:15:52,603 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,603 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,604 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,604 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:52,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,606 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,606 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:52,606 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:52,606 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ZKUtil(162): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:52,606 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ZKUtil(162): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,607 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ZKUtil(162): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:52,607 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ZKUtil(162): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:52,608 DEBUG [RS:3;jenkins-hbase4:33873] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 22:15:52,608 INFO [RS:3;jenkins-hbase4:33873] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 22:15:52,609 INFO [RS:3;jenkins-hbase4:33873] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 22:15:52,611 INFO [RS:3;jenkins-hbase4:33873] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 22:15:52,611 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,614 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 22:15:52,615 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
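A few lines above, the RSGroupInfoManager's ServerEventsListenerThread reports "Updated with servers: 4", i.e. the new RegionServer has joined the default group. A minimal sketch of how a test could read that membership back, using the hbase-rsgroup client classes that appear in the stack traces further below (the exact signatures are assumptions):

    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupMembershipSketch {
      // Sketch: read back the default RSGroup and return its server list
      // (expected to contain all four RegionServers at this point in the log).
      static Set<Address> defaultGroupServers(Connection connection) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
        RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
        return defaultGroup.getServers();
      }
    }
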
2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,616 DEBUG [RS:3;jenkins-hbase4:33873] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 22:15:52,619 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,619 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,619 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 22:15:52,630 INFO [RS:3;jenkins-hbase4:33873] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 22:15:52,630 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33873,1689632152536-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-17 22:15:52,640 INFO [RS:3;jenkins-hbase4:33873] regionserver.Replication(203): jenkins-hbase4.apache.org,33873,1689632152536 started 2023-07-17 22:15:52,640 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33873,1689632152536, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33873, sessionid=0x101755b1977000b 2023-07-17 22:15:52,640 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 22:15:52,640 DEBUG [RS:3;jenkins-hbase4:33873] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,640 DEBUG [RS:3;jenkins-hbase4:33873] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33873,1689632152536' 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33873,1689632152536' 2023-07-17 22:15:52,641 DEBUG [RS:3;jenkins-hbase4:33873] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 22:15:52,642 DEBUG [RS:3;jenkins-hbase4:33873] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 22:15:52,642 DEBUG [RS:3;jenkins-hbase4:33873] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 22:15:52,642 INFO [RS:3;jenkins-hbase4:33873] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 22:15:52,642 INFO [RS:3;jenkins-hbase4:33873] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
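The "add rsgroup master" request above, and the moveServers failure a few lines below, come from the shared cleanup in TestRSGroupsBase: it tries to move the master's address into a group named "master" and tolerates the resulting ConstraintException, logging it as "Got this on setup, FYI". A hedged sketch of that pattern (not the actual test source; the method and helper names are illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterSketch {
      // Sketch: the cleanup-style call that produces the ConstraintException logged below.
      // The master's host:port is not a RegionServer, so moveServers rejects it.
      static void tryMoveMasterToOwnGroup(RSGroupAdminClient rsGroupAdmin, Address masterAddress) throws Exception {
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException expected) {
          // "Server ... is either offline or it does not exist." -- expected here and ignored.
        }
      }
    }
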
2023-07-17 22:15:52,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:52,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:52,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:52,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:52,648 DEBUG [hconnection-0x372c6408-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:52,650 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40540, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:52,654 DEBUG [hconnection-0x372c6408-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 22:15:52,655 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 22:15:52,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:52,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:52,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:52,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:52,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54792 deadline: 1689633352659, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:52,660 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:52,661 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:52,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:52,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:52,661 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:52,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:52,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:52,724 INFO [Listener at localhost/44229] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=564 (was 512) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:46771 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x7bee2bd3-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1070776173_17 at /127.0.0.1:51220 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Listener at localhost/44229-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x372c6408-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:36311Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1182643791@qtp-1774399312-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: hconnection-0x509c2034-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data4/current/BP-715164769-172.31.14.131-1689632150312 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616589671_17 at /127.0.0.1:50836 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@1ba4dc3f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:33873Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34241 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2016723980-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6bcaac1 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp488724519-2320 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1551935033-2217 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp488724519-2325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:46771 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x775834a5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x372c6408-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1745884256) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: 474283062@qtp-1021376429-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39439 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1551935033-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@74ebeda4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:41705 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1a96ce03 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616589671_17 at /127.0.0.1:51248 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36903 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp703712713-2249-acceptor-0@327f8311-ServerConnector@879cd83{HTTP/1.1, (http/1.1)}{0.0.0.0:35627} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5210a738-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1539435759-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1551935033-2218-acceptor-0@4046303e-ServerConnector@43a47f7f{HTTP/1.1, (http/1.1)}{0.0.0.0:41999} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46299,1689632146159 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp2016723980-2587 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp488724519-2324-acceptor-0@1e2fc385-ServerConnector@13e6a5f3{HTTP/1.1, (http/1.1)}{0.0.0.0:34603} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:36311 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1539435759-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server idle connection scanner for port 36903 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 46771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-2a0d1eb2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1342701824_17 at /127.0.0.1:51266 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1551935033-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x1ecadc90 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_970222874_17 at /127.0.0.1:51250 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.util.JvmPauseMonitor$Monitor@788f0762 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1551935033-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208021378-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp703712713-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1539435759-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1208021378-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x7bee2bd3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp703712713-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp703712713-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:46771 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1342701824_17 at /127.0.0.1:51316 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x509c2034-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44229-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp703712713-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1539435759-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1551935033-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33873 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp703712713-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 36903 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44229 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) 
org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2016723980-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 46771 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:52793@0x41f5b42e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1539435759-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x2cccd12a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 44229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp703712713-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44229-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp488724519-2326 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33873 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:1;jenkins-hbase4:34241 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x2d28b3ea-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 836069465@qtp-2083601834-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39493 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52793@0x41f5b42e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/44229-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34241 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_970222874_17 at /127.0.0.1:38318 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x7bee2bd3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/44229-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1198843625@qtp-2083601834-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins@localhost:46771 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@282c4d02 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x2cccd12a-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x1ecadc90-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x509c2034-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 44229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x775834a5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:41705 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:41705 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: jenkins-hbase4:34241Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632151552 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1070776173_17 at /127.0.0.1:38266 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data5/current/BP-715164769-172.31.14.131-1689632150312 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5b7c23da[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208021378-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:41705 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2016723980-2594 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb-prefix:jenkins-hbase4.apache.org,32813,1689632151260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@68a050ef java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:46771 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x1ecadc90-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:53229 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:46771 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data2/current/BP-715164769-172.31.14.131-1689632150312 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:32813 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208021378-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4e01afcf[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-221f8d96-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4e604806 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33873-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x509c2034-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb-prefix:jenkins-hbase4.apache.org,36311,1689632151335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:34241-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:41705 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
LeaseRenewer:jenkins.hfs.4@localhost:41705 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208021378-2308 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:32813Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 373026934@qtp-1021376429-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@398f58e0 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData-prefix:jenkins-hbase4.apache.org,37449,1689632151185 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@2b5d8c66 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x2d28b3ea-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1551935033-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 44229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp488724519-2321 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1931429220@qtp-1430883268-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: hconnection-0x509c2034-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616589671_17 at /127.0.0.1:50732 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:37449 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x39e05e31-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data6/current/BP-715164769-172.31.14.131-1689632150312 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x77869323-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1539435759-2278 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36903 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2016723980-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 44229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 36903 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb-prefix:jenkins-hbase4.apache.org,34241,1689632151301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_970222874_17 at /127.0.0.1:50820 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41705 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7d87758a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x509c2034-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1060531687@qtp-1774399312-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44077 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:46771 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp2016723980-2588-acceptor-0@49d184ca-ServerConnector@6ac5b445{HTTP/1.1, (http/1.1)}{0.0.0.0:43027} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1342701824_17 at /127.0.0.1:50850 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x775834a5-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp488724519-2319 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1070776173_17 at /127.0.0.1:50786 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_970222874_17 at /127.0.0.1:38294 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x77869323 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:32813-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x39e05e31 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36903 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1539435759-2279-acceptor-0@738b57c7-ServerConnector@5bd01aa{HTTP/1.1, (http/1.1)}{0.0.0.0:46383} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 46771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:2;jenkins-hbase4:36311-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:53229): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp488724519-2322 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x2cccd12a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1208021378-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2016723980-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_970222874_17 at /127.0.0.1:50864 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x509c2034-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb-prefix:jenkins-hbase4.apache.org,34241,1689632151301.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data1/current/BP-715164769-172.31.14.131-1689632150312 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data3/current/BP-715164769-172.31.14.131-1689632150312 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 274828052@qtp-1430883268-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44843 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_970222874_17 at /127.0.0.1:51268 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@4effafd7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x39e05e31-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: region-location-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@343a543b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:46771 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@535b45bf java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42151-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x2d28b3ea sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1101731022.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 44229 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 46771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616589671_17 at /127.0.0.1:38296 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53229@0x77869323-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43153 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1208021378-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:41705 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1da00a90 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-715164769-172.31.14.131-1689632150312:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp703712713-2248 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x51656034-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-SendThread(127.0.0.1:53229) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-2e984dbf-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1342701824_17 at /127.0.0.1:38302 [Receiving block BP-715164769-172.31.14.131-1689632150312:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44229-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1070776173_17 at /127.0.0.1:38228 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 46771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:41705 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33873 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:46771 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (548927496) connection to localhost/127.0.0.1:46771 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1539435759-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208021378-2309-acceptor-0@7ba7e039-ServerConnector@85cae1c{HTTP/1.1, (http/1.1)}{0.0.0.0:36935} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42151-SendThread(127.0.0.1:52793) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: qtp2016723980-2593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x509c2034-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52793@0x41f5b42e-SendThread(127.0.0.1:52793) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37449,1689632151185 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632151552 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp1551935033-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp488724519-2323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=865 (was 781) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=422 (was 417) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=2732 (was 2980) 2023-07-17 22:15:52,727 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-17 22:15:52,744 INFO [RS:3;jenkins-hbase4:33873] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33873%2C1689632152536, suffix=, logDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,33873,1689632152536, archiveDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs, maxLogs=32 2023-07-17 22:15:52,746 INFO [Listener at localhost/44229] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=564, OpenFileDescriptor=865, MaxFileDescriptor=60000, SystemLoadAverage=422, ProcessCount=172, AvailableMemoryMB=2731 2023-07-17 22:15:52,746 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-17 22:15:52,746 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-17 22:15:52,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:52,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:52,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:52,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
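The RSGroupAdminEndpoint entries just above and below come from the test's per-method setup/teardown, which lists all rsgroups, moves each non-default group's tables and servers back to the default group, and removes the extra group (empty sets are simply logged and ignored on the server, as the "moveTables() passed an empty set. Ignoring." line shows). The following is a minimal Java sketch of that kind of cleanup loop, assuming only the RSGroupAdmin / RSGroupInfo client API named in the stack traces later in this log; the helper name and exact flow are illustrative, not the test's actual source.

    // Illustrative sketch only -- not TestRSGroupsBase's real code.
    // Assumes the hbase-rsgroup client API (RSGroupAdmin, RSGroupInfo, Address)
    // referenced elsewhere in this log.
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      static void restoreDefaultGrouping(RSGroupAdmin rsGroupAdmin) throws IOException {
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue; // the default group itself needs no restoring
          }
          // Move tables back first, then servers; an empty set is a no-op that
          // the master only logs at DEBUG, matching the entries in this log.
          Set<TableName> tables = new HashSet<>(group.getTables());
          rsGroupAdmin.moveTables(tables, RSGroupInfo.DEFAULT_GROUP);
          Set<Address> servers = new HashSet<>(group.getServers());
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
          // Finally drop the now-empty group.
          rsGroupAdmin.removeRSGroup(group.getName());
        }
      }
    }

Each of these client calls surfaces in the log as a pair of entries, one from RSGroupAdminEndpoint$RSGroupAdminServiceImpl and one from MasterRpcServices recording the corresponding RSGroupAdminService RPC (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup).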
2023-07-17 22:15:52,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:52,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:52,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:52,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:52,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:52,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:52,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:52,762 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:52,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:52,764 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK] 2023-07-17 22:15:52,766 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK] 2023-07-17 22:15:52,766 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK] 2023-07-17 22:15:52,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:52,771 INFO [RS:3;jenkins-hbase4:33873] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,33873,1689632152536/jenkins-hbase4.apache.org%2C33873%2C1689632152536.1689632152744 2023-07-17 22:15:52,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:52,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:52,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote 
address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:52,774 DEBUG [RS:3;jenkins-hbase4:33873] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36521,DS-830fe05b-b6b7-46e5-b89f-17c9c35d0c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:39967,DS-2c7673a8-6660-4cab-8d85-738db2370e2e,DISK], DatanodeInfoWithStorage[127.0.0.1:37475,DS-bc130240-e495-4017-b698-2d0bde6f5868,DISK]] 2023-07-17 22:15:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:52,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:52,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:52,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54792 deadline: 1689633352779, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:52,779 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:52,781 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:52,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:52,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:52,782 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:52,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:52,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:52,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:52,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-17 22:15:52,787 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:52,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-17 22:15:52,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:52,789 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:52,789 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:52,790 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:52,791 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 22:15:52,793 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 
22:15:52,794 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf empty. 2023-07-17 22:15:52,794 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:52,794 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-17 22:15:52,808 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-17 22:15:52,809 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => c5f90b9a4a5724d07f691ee950ac7eaf, NAME => 't1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp 2023-07-17 22:15:52,821 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:52,821 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing c5f90b9a4a5724d07f691ee950ac7eaf, disabling compactions & flushes 2023-07-17 22:15:52,821 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:52,822 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:52,822 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. after waiting 0 ms 2023-07-17 22:15:52,822 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:52,822 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:52,822 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for c5f90b9a4a5724d07f691ee950ac7eaf: 2023-07-17 22:15:52,824 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 22:15:52,825 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632152824"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632152824"}]},"ts":"1689632152824"} 2023-07-17 22:15:52,826 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 22:15:52,826 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 22:15:52,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632152826"}]},"ts":"1689632152826"} 2023-07-17 22:15:52,828 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-17 22:15:52,831 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 22:15:52,831 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 22:15:52,831 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 22:15:52,831 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 22:15:52,831 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-17 22:15:52,831 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 22:15:52,831 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, ASSIGN}] 2023-07-17 22:15:52,832 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, ASSIGN 2023-07-17 22:15:52,833 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33873,1689632152536; forceNewPlan=false, retain=false 2023-07-17 22:15:52,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:52,983 INFO [jenkins-hbase4:37449] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 22:15:52,984 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c5f90b9a4a5724d07f691ee950ac7eaf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:52,985 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632152984"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632152984"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632152984"}]},"ts":"1689632152984"} 2023-07-17 22:15:52,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure c5f90b9a4a5724d07f691ee950ac7eaf, server=jenkins-hbase4.apache.org,33873,1689632152536}] 2023-07-17 22:15:53,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:53,138 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:53,138 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 22:15:53,139 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37314, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 22:15:53,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:53,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5f90b9a4a5724d07f691ee950ac7eaf, NAME => 't1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.', STARTKEY => '', ENDKEY => ''} 2023-07-17 22:15:53,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 22:15:53,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,145 INFO [StoreOpener-c5f90b9a4a5724d07f691ee950ac7eaf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,146 DEBUG [StoreOpener-c5f90b9a4a5724d07f691ee950ac7eaf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/cf1 2023-07-17 22:15:53,146 DEBUG [StoreOpener-c5f90b9a4a5724d07f691ee950ac7eaf-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/cf1 2023-07-17 22:15:53,146 INFO [StoreOpener-c5f90b9a4a5724d07f691ee950ac7eaf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5f90b9a4a5724d07f691ee950ac7eaf columnFamilyName cf1 2023-07-17 22:15:53,147 INFO [StoreOpener-c5f90b9a4a5724d07f691ee950ac7eaf-1] regionserver.HStore(310): Store=c5f90b9a4a5724d07f691ee950ac7eaf/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 22:15:53,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 22:15:53,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5f90b9a4a5724d07f691ee950ac7eaf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10678961280, jitterRate=-0.005444228649139404}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 22:15:53,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5f90b9a4a5724d07f691ee950ac7eaf: 2023-07-17 22:15:53,155 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf., pid=14, masterSystemTime=1689632153138 2023-07-17 22:15:53,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:53,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 
2023-07-17 22:15:53,160 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c5f90b9a4a5724d07f691ee950ac7eaf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:53,160 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632153160"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689632153160"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689632153160"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689632153160"}]},"ts":"1689632153160"} 2023-07-17 22:15:53,162 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-17 22:15:53,162 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure c5f90b9a4a5724d07f691ee950ac7eaf, server=jenkins-hbase4.apache.org,33873,1689632152536 in 175 msec 2023-07-17 22:15:53,164 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-17 22:15:53,164 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, ASSIGN in 331 msec 2023-07-17 22:15:53,164 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 22:15:53,164 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632153164"}]},"ts":"1689632153164"} 2023-07-17 22:15:53,165 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-17 22:15:53,167 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 22:15:53,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 383 msec 2023-07-17 22:15:53,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 22:15:53,391 INFO [Listener at localhost/44229] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-17 22:15:53,392 DEBUG [Listener at localhost/44229] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-17 22:15:53,392 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:53,394 INFO [Listener at localhost/44229] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-17 22:15:53,394 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:53,394 INFO [Listener at localhost/44229] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-17 22:15:53,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 22:15:53,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-17 22:15:53,398 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 22:15:53,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-17 22:15:53,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:54792 deadline: 1689632213395, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-17 22:15:53,400 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:53,401 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-17 22:15:53,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:53,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:53,502 INFO [Listener at localhost/44229] client.HBaseAdmin$15(890): Started disable of t1 2023-07-17 22:15:53,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-17 22:15:53,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-17 22:15:53,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-17 22:15:53,506 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632153506"}]},"ts":"1689632153506"} 2023-07-17 22:15:53,508 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-17 22:15:53,511 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-17 22:15:53,512 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, UNASSIGN}] 2023-07-17 22:15:53,512 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, UNASSIGN 2023-07-17 22:15:53,513 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c5f90b9a4a5724d07f691ee950ac7eaf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:53,513 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632153513"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689632153513"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689632153513"}]},"ts":"1689632153513"} 2023-07-17 22:15:53,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure c5f90b9a4a5724d07f691ee950ac7eaf, server=jenkins-hbase4.apache.org,33873,1689632152536}] 2023-07-17 22:15:53,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-17 22:15:53,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5f90b9a4a5724d07f691ee950ac7eaf, disabling compactions & flushes 2023-07-17 22:15:53,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:53,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:53,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. after waiting 0 ms 2023-07-17 22:15:53,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 
2023-07-17 22:15:53,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 22:15:53,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf. 2023-07-17 22:15:53,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5f90b9a4a5724d07f691ee950ac7eaf: 2023-07-17 22:15:53,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,673 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c5f90b9a4a5724d07f691ee950ac7eaf, regionState=CLOSED 2023-07-17 22:15:53,674 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689632153673"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689632153673"}]},"ts":"1689632153673"} 2023-07-17 22:15:53,676 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-17 22:15:53,676 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure c5f90b9a4a5724d07f691ee950ac7eaf, server=jenkins-hbase4.apache.org,33873,1689632152536 in 161 msec 2023-07-17 22:15:53,678 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-17 22:15:53,678 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=c5f90b9a4a5724d07f691ee950ac7eaf, UNASSIGN in 164 msec 2023-07-17 22:15:53,685 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689632153685"}]},"ts":"1689632153685"} 2023-07-17 22:15:53,686 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-17 22:15:53,688 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-17 22:15:53,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 186 msec 2023-07-17 22:15:53,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-17 22:15:53,808 INFO [Listener at localhost/44229] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-17 22:15:53,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-17 22:15:53,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-17 22:15:53,812 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-17 22:15:53,812 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-17 22:15:53,813 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-17 22:15:53,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:53,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:53,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:53,817 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 22:15:53,818 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/cf1, FileablePath, hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/recovered.edits] 2023-07-17 22:15:53,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/recovered.edits/4.seqid to hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/archive/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf/recovered.edits/4.seqid 2023-07-17 22:15:53,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/.tmp/data/default/t1/c5f90b9a4a5724d07f691ee950ac7eaf 2023-07-17 22:15:53,823 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-17 22:15:53,825 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-17 22:15:53,826 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-17 22:15:53,828 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-17 22:15:53,829 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-17 22:15:53,829 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-17 22:15:53,829 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689632153829"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:53,830 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 22:15:53,830 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c5f90b9a4a5724d07f691ee950ac7eaf, NAME => 't1,,1689632152784.c5f90b9a4a5724d07f691ee950ac7eaf.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 22:15:53,830 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-17 22:15:53,830 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689632153830"}]},"ts":"9223372036854775807"} 2023-07-17 22:15:53,831 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-17 22:15:53,834 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-17 22:15:53,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-17 22:15:53,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 22:15:53,918 INFO [Listener at localhost/44229] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-17 22:15:53,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:53,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:53,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:53,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:53,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:53,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:53,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:53,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:53,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:53,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:53,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:53,938 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:53,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:53,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:53,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:53,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:53,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:53,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:53,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:53,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:53,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:53,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54792 deadline: 1689633353948, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:53,948 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:53,953 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:53,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:53,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:53,954 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:53,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:53,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:53,975 INFO [Listener at localhost/44229] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=576 (was 564) - Thread LEAK? 
-, OpenFileDescriptor=858 (was 865), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=422 (was 422), ProcessCount=172 (was 172), AvailableMemoryMB=2708 (was 2731) 2023-07-17 22:15:53,975 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-17 22:15:53,998 INFO [Listener at localhost/44229] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=576, OpenFileDescriptor=858, MaxFileDescriptor=60000, SystemLoadAverage=422, ProcessCount=172, AvailableMemoryMB=2708 2023-07-17 22:15:53,998 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-17 22:15:53,998 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-17 22:15:54,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:54,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:54,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:54,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,016 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:54,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:54,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,021 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:54,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:54,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54792 deadline: 1689633354031, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:54,032 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:54,034 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:54,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,036 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:54,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:54,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:54,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-17 22:15:54,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:54,039 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-17 22:15:54,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-17 22:15:54,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 22:15:54,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:54,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:54,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:54,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,064 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:54,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:54,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:54,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:54,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54792 deadline: 1689633354076, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:54,077 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:54,079 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:54,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,080 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:54,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:54,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:54,113 INFO [Listener at localhost/44229] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=576 (was 576), OpenFileDescriptor=856 (was 858), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=422 (was 422), ProcessCount=172 (was 172), AvailableMemoryMB=2709 (was 2708) - AvailableMemoryMB LEAK? 
- 2023-07-17 22:15:54,113 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-17 22:15:54,131 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-17 22:15:54,144 INFO [Listener at localhost/44229] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576, OpenFileDescriptor=856, MaxFileDescriptor=60000, SystemLoadAverage=422, ProcessCount=172, AvailableMemoryMB=2709 2023-07-17 22:15:54,144 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-17 22:15:54,144 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-17 22:15:54,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:54,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:54,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:54,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,169 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:54,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:54,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,173 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:54,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:54,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54792 deadline: 1689633354196, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:54,197 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:54,199 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:54,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,200 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:54,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:54,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:54,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:54,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:54,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:54,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,223 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:54,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:54,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:54,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:54,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54792 deadline: 1689633354238, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:54,244 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:54,245 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:54,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,247 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:54,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:54,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:54,277 INFO [Listener at localhost/44229] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 576), OpenFileDescriptor=853 (was 856), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=422 (was 422), ProcessCount=172 (was 172), AvailableMemoryMB=2713 (was 2709) - AvailableMemoryMB LEAK? 
- 2023-07-17 22:15:54,277 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-17 22:15:54,316 INFO [Listener at localhost/44229] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575, OpenFileDescriptor=853, MaxFileDescriptor=60000, SystemLoadAverage=422, ProcessCount=172, AvailableMemoryMB=2716 2023-07-17 22:15:54,316 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-17 22:15:54,316 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-17 22:15:54,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 22:15:54,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:54,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:54,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,335 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:54,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:54,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,339 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:54,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:54,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54792 deadline: 1689633354347, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:54,348 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 22:15:54,349 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,350 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:54,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:54,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:54,351 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-17 22:15:54,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-17 22:15:54,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-17 22:15:54,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 22:15:54,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-17 22:15:54,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,365 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 22:15:54,369 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:54,372 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-17 22:15:54,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 22:15:54,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-17 22:15:54,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:54792 deadline: 1689633354466, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-17 22:15:54,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-17 22:15:54,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-17 22:15:54,487 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-17 22:15:54,488 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-17 22:15:54,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-17 22:15:54,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-17 22:15:54,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-17 22:15:54,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-17 22:15:54,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 22:15:54,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-17 22:15:54,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,605 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,607 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-17 22:15:54,609 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,610 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-17 22:15:54,610 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 22:15:54,611 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,612 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 22:15:54,613 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-17 22:15:54,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-17 22:15:54,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-17 22:15:54,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-17 22:15:54,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-17 22:15:54,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:54792 deadline: 1689632214724, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-17 22:15:54,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:54,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-17 22:15:54,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 22:15:54,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 22:15:54,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 22:15:54,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 22:15:54,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 22:15:54,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 22:15:54,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 22:15:54,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 22:15:54,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 22:15:54,742 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 22:15:54,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 22:15:54,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 22:15:54,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 22:15:54,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 22:15:54,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 22:15:54,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37449] to rsgroup master 2023-07-17 22:15:54,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 22:15:54,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54792 deadline: 1689633354751, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 2023-07-17 22:15:54,751 WARN [Listener at localhost/44229] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37449 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 22:15:54,753 INFO [Listener at localhost/44229] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 22:15:54,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 22:15:54,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 22:15:54,754 INFO [Listener at localhost/44229] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32813, jenkins-hbase4.apache.org:33873, jenkins-hbase4.apache.org:34241, jenkins-hbase4.apache.org:36311], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 22:15:54,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 22:15:54,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 22:15:54,772 INFO [Listener at localhost/44229] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574 (was 575), OpenFileDescriptor=846 (was 853), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=422 (was 422), ProcessCount=172 (was 172), AvailableMemoryMB=2730 (was 2716) - AvailableMemoryMB LEAK? 
- 2023-07-17 22:15:54,772 WARN [Listener at localhost/44229] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-17 22:15:54,772 INFO [Listener at localhost/44229] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-17 22:15:54,772 INFO [Listener at localhost/44229] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-17 22:15:54,772 DEBUG [Listener at localhost/44229] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ecadc90 to 127.0.0.1:53229 2023-07-17 22:15:54,772 DEBUG [Listener at localhost/44229] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,772 DEBUG [Listener at localhost/44229] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-17 22:15:54,772 DEBUG [Listener at localhost/44229] util.JVMClusterUtil(257): Found active master hash=1387508068, stopped=false 2023-07-17 22:15:54,772 DEBUG [Listener at localhost/44229] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 22:15:54,773 DEBUG [Listener at localhost/44229] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 22:15:54,773 INFO [Listener at localhost/44229] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:54,774 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:54,774 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:54,774 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:54,774 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:54,774 INFO [Listener at localhost/44229] procedure2.ProcedureExecutor(629): Stopping 2023-07-17 22:15:54,774 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:54,774 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 22:15:54,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:54,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:54,775 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:54,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:54,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 22:15:54,775 DEBUG [Listener at localhost/44229] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x775834a5 to 127.0.0.1:53229 2023-07-17 22:15:54,775 DEBUG [Listener at localhost/44229] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,775 INFO [Listener at localhost/44229] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32813,1689632151260' ***** 2023-07-17 22:15:54,775 INFO [Listener at localhost/44229] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:54,775 INFO [Listener at localhost/44229] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34241,1689632151301' ***** 2023-07-17 22:15:54,775 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:54,775 INFO [Listener at localhost/44229] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:54,775 INFO [Listener at localhost/44229] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36311,1689632151335' ***** 2023-07-17 22:15:54,776 INFO [Listener at localhost/44229] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:54,776 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:54,776 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:54,776 INFO [Listener at localhost/44229] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33873,1689632152536' ***** 2023-07-17 22:15:54,778 INFO [Listener at localhost/44229] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 22:15:54,778 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:54,781 INFO [RS:3;jenkins-hbase4:33873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@307a6dc1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:54,781 INFO [RS:2;jenkins-hbase4:36311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@46a2470b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:54,781 INFO [RS:0;jenkins-hbase4:32813] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5a351aef{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:54,781 INFO [RS:1;jenkins-hbase4:34241] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@28b0b21b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 22:15:54,782 INFO [RS:3;jenkins-hbase4:33873] server.AbstractConnector(383): Stopped ServerConnector@6ac5b445{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:54,782 INFO [RS:3;jenkins-hbase4:33873] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:54,782 INFO [RS:1;jenkins-hbase4:34241] server.AbstractConnector(383): Stopped ServerConnector@5bd01aa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:54,782 INFO [RS:2;jenkins-hbase4:36311] server.AbstractConnector(383): Stopped ServerConnector@85cae1c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:54,782 INFO [RS:0;jenkins-hbase4:32813] server.AbstractConnector(383): Stopped ServerConnector@879cd83{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:54,783 INFO [RS:2;jenkins-hbase4:36311] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:54,783 INFO [RS:3;jenkins-hbase4:33873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@65d032f9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:54,783 INFO [RS:1;jenkins-hbase4:34241] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:54,784 INFO [RS:2;jenkins-hbase4:36311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e2487a8{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:54,783 INFO [RS:0;jenkins-hbase4:32813] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:54,785 INFO [RS:1;jenkins-hbase4:34241] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c9668cb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:54,786 INFO [RS:2;jenkins-hbase4:36311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@624f13b8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:54,787 INFO [RS:1;jenkins-hbase4:34241] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@59e8e2d8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:54,784 INFO [RS:3;jenkins-hbase4:33873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7eac29c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:54,787 INFO [RS:0;jenkins-hbase4:32813] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@27b9cbdb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:54,788 INFO [RS:0;jenkins-hbase4:32813] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@19753151{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:54,788 INFO [RS:3;jenkins-hbase4:33873] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:54,788 INFO [RS:2;jenkins-hbase4:36311] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:54,788 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:54,788 INFO [RS:3;jenkins-hbase4:33873] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:54,788 INFO [RS:3;jenkins-hbase4:33873] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:54,788 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:54,788 DEBUG [RS:3;jenkins-hbase4:33873] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2cccd12a to 127.0.0.1:53229 2023-07-17 22:15:54,788 INFO [RS:1;jenkins-hbase4:34241] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:54,789 DEBUG [RS:3;jenkins-hbase4:33873] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,789 INFO [RS:0;jenkins-hbase4:32813] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 22:15:54,789 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:54,788 INFO [RS:2;jenkins-hbase4:36311] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:54,788 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:54,789 INFO [RS:2;jenkins-hbase4:36311] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:54,789 INFO [RS:0;jenkins-hbase4:32813] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:54,789 INFO [RS:0;jenkins-hbase4:32813] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 22:15:54,789 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33873,1689632152536; all regions closed. 2023-07-17 22:15:54,789 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 22:15:54,789 INFO [RS:1;jenkins-hbase4:34241] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 22:15:54,789 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(3305): Received CLOSE for 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:54,789 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(3305): Received CLOSE for 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:54,789 INFO [RS:1;jenkins-hbase4:34241] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-17 22:15:54,790 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:54,790 DEBUG [RS:1;jenkins-hbase4:34241] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2d28b3ea to 127.0.0.1:53229 2023-07-17 22:15:54,790 DEBUG [RS:1;jenkins-hbase4:34241] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,790 INFO [RS:1;jenkins-hbase4:34241] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:54,790 INFO [RS:1;jenkins-hbase4:34241] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:54,790 INFO [RS:1;jenkins-hbase4:34241] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:54,790 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-17 22:15:54,790 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 22:15:54,790 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-17 22:15:54,790 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 22:15:54,791 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 22:15:54,791 DEBUG [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-17 22:15:54,791 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 22:15:54,791 DEBUG [RS:2;jenkins-hbase4:36311] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7bee2bd3 to 127.0.0.1:53229 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f45af643b80ad8d1187de5cd9d7385c, disabling compactions & flushes 2023-07-17 22:15:54,791 DEBUG [RS:2;jenkins-hbase4:36311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 25cb52fd09d0e74e3e55de2cb1287a63, disabling compactions & flushes 2023-07-17 22:15:54,791 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:54,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 
2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:54,791 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-17 22:15:54,791 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. after waiting 0 ms 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:54,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7f45af643b80ad8d1187de5cd9d7385c 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-17 22:15:54,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:54,791 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x77869323 to 127.0.0.1:53229 2023-07-17 22:15:54,792 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,792 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 22:15:54,792 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1478): Online Regions={25cb52fd09d0e74e3e55de2cb1287a63=hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63.} 2023-07-17 22:15:54,792 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1504): Waiting on 25cb52fd09d0e74e3e55de2cb1287a63 2023-07-17 22:15:54,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. after waiting 0 ms 2023-07-17 22:15:54,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 
2023-07-17 22:15:54,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 25cb52fd09d0e74e3e55de2cb1287a63 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-17 22:15:54,791 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1478): Online Regions={7f45af643b80ad8d1187de5cd9d7385c=hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c.} 2023-07-17 22:15:54,794 DEBUG [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1504): Waiting on 7f45af643b80ad8d1187de5cd9d7385c 2023-07-17 22:15:54,804 DEBUG [RS:3;jenkins-hbase4:33873] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs 2023-07-17 22:15:54,804 INFO [RS:3;jenkins-hbase4:33873] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33873%2C1689632152536:(num 1689632152744) 2023-07-17 22:15:54,804 DEBUG [RS:3;jenkins-hbase4:33873] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:54,804 INFO [RS:3;jenkins-hbase4:33873] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:54,807 INFO [RS:3;jenkins-hbase4:33873] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:54,810 INFO [RS:3;jenkins-hbase4:33873] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:54,810 INFO [RS:3;jenkins-hbase4:33873] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 22:15:54,810 INFO [RS:3;jenkins-hbase4:33873] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:54,810 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-17 22:15:54,814 INFO [RS:3;jenkins-hbase4:33873] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33873 2023-07-17 22:15:54,827 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:54,830 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/.tmp/info/a250bf7a7821479ebd7834461bd40a84 2023-07-17 22:15:54,834 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/.tmp/m/6da90723701a46978c9b43f9102c4c4b 2023-07-17 22:15:54,834 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/.tmp/info/2cda6505836a43589cd7bedd5445d016 2023-07-17 22:15:54,838 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a250bf7a7821479ebd7834461bd40a84 2023-07-17 22:15:54,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6da90723701a46978c9b43f9102c4c4b 2023-07-17 22:15:54,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/.tmp/m/6da90723701a46978c9b43f9102c4c4b as hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/m/6da90723701a46978c9b43f9102c4c4b 2023-07-17 22:15:54,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2cda6505836a43589cd7bedd5445d016 2023-07-17 22:15:54,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/.tmp/info/2cda6505836a43589cd7bedd5445d016 as hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/info/2cda6505836a43589cd7bedd5445d016 2023-07-17 22:15:54,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6da90723701a46978c9b43f9102c4c4b 2023-07-17 22:15:54,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/m/6da90723701a46978c9b43f9102c4c4b, entries=12, sequenceid=29, filesize=5.4 K 2023-07-17 22:15:54,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): 
Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 25cb52fd09d0e74e3e55de2cb1287a63 in 56ms, sequenceid=29, compaction requested=false 2023-07-17 22:15:54,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2cda6505836a43589cd7bedd5445d016 2023-07-17 22:15:54,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/info/2cda6505836a43589cd7bedd5445d016, entries=3, sequenceid=9, filesize=5.0 K 2023-07-17 22:15:54,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 7f45af643b80ad8d1187de5cd9d7385c in 61ms, sequenceid=9, compaction requested=false 2023-07-17 22:15:54,865 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:54,865 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/.tmp/rep_barrier/937d0fd02ac64a12b56bf7c2380654a6 2023-07-17 22:15:54,866 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:54,868 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:54,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/rsgroup/25cb52fd09d0e74e3e55de2cb1287a63/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-17 22:15:54,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:54,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/namespace/7f45af643b80ad8d1187de5cd9d7385c/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-17 22:15:54,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:54,871 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 937d0fd02ac64a12b56bf7c2380654a6 2023-07-17 22:15:54,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 25cb52fd09d0e74e3e55de2cb1287a63: 2023-07-17 22:15:54,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689632152155.25cb52fd09d0e74e3e55de2cb1287a63. 2023-07-17 22:15:54,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 
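The flushes logged above (the rsgroup region's "m" family, the namespace region's "info" family, and hbase:meta) happen automatically as each region is closed during shutdown. A test that wants the same memstore-to-HFile step before making on-disk assertions can request it explicitly through the public Admin API; the sketch below is illustrative only, with connection setup and the class name being assumptions:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative sketch; the close-time flushes in the log need no such call.
public class ExplicitFlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Admin#flush performs the same memstore flush on demand for a given table.
      admin.flush(TableName.valueOf("hbase:rsgroup"));
      admin.flush(TableName.NAMESPACE_TABLE_NAME);
      admin.flush(TableName.META_TABLE_NAME);
    }
  }
}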
2023-07-17 22:15:54,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f45af643b80ad8d1187de5cd9d7385c: 2023-07-17 22:15:54,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689632152018.7f45af643b80ad8d1187de5cd9d7385c. 2023-07-17 22:15:54,891 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/.tmp/table/cba00041b2c24e13aadab71c52a9dee5 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33873,1689632152536 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:54,892 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:54,893 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:54,896 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cba00041b2c24e13aadab71c52a9dee5 2023-07-17 22:15:54,897 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/.tmp/info/a250bf7a7821479ebd7834461bd40a84 as hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/info/a250bf7a7821479ebd7834461bd40a84 2023-07-17 22:15:54,902 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a250bf7a7821479ebd7834461bd40a84 2023-07-17 22:15:54,902 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/info/a250bf7a7821479ebd7834461bd40a84, entries=22, sequenceid=26, filesize=7.3 K 2023-07-17 22:15:54,903 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/.tmp/rep_barrier/937d0fd02ac64a12b56bf7c2380654a6 as hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/rep_barrier/937d0fd02ac64a12b56bf7c2380654a6 2023-07-17 22:15:54,907 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 937d0fd02ac64a12b56bf7c2380654a6 2023-07-17 22:15:54,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/rep_barrier/937d0fd02ac64a12b56bf7c2380654a6, entries=1, sequenceid=26, filesize=4.9 K 2023-07-17 22:15:54,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/.tmp/table/cba00041b2c24e13aadab71c52a9dee5 as hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/table/cba00041b2c24e13aadab71c52a9dee5 2023-07-17 22:15:54,912 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cba00041b2c24e13aadab71c52a9dee5 2023-07-17 22:15:54,913 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/table/cba00041b2c24e13aadab71c52a9dee5, entries=6, sequenceid=26, filesize=5.1 K 2023-07-17 22:15:54,913 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 122ms, sequenceid=26, compaction requested=false 2023-07-17 22:15:54,921 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-17 22:15:54,921 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 22:15:54,922 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
hbase:meta,,1.1588230740 2023-07-17 22:15:54,922 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 22:15:54,922 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-17 22:15:54,991 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34241,1689632151301; all regions closed. 2023-07-17 22:15:54,992 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32813,1689632151260; all regions closed. 2023-07-17 22:15:54,994 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36311,1689632151335; all regions closed. 2023-07-17 22:15:54,996 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33873,1689632152536] 2023-07-17 22:15:54,996 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/WALs/jenkins-hbase4.apache.org,32813,1689632151260/jenkins-hbase4.apache.org%2C32813%2C1689632151260.1689632151799 not finished, retry = 0 2023-07-17 22:15:54,996 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33873,1689632152536; numProcessing=1 2023-07-17 22:15:54,997 DEBUG [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs 2023-07-17 22:15:54,997 INFO [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34241%2C1689632151301.meta:.meta(num 1689632151968) 2023-07-17 22:15:54,998 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33873,1689632152536 already deleted, retry=false 2023-07-17 22:15:54,998 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33873,1689632152536 expired; onlineServers=3 2023-07-17 22:15:55,001 DEBUG [RS:2;jenkins-hbase4:36311] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs 2023-07-17 22:15:55,001 INFO [RS:2;jenkins-hbase4:36311] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36311%2C1689632151335:(num 1689632151800) 2023-07-17 22:15:55,001 DEBUG [RS:2;jenkins-hbase4:36311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:55,001 INFO [RS:2;jenkins-hbase4:36311] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:55,001 INFO [RS:2;jenkins-hbase4:36311] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:55,001 INFO [RS:2;jenkins-hbase4:36311] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:55,001 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:55,001 INFO [RS:2;jenkins-hbase4:36311] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
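As each region server finishes, its final WAL is closed and moved into the shared oldWALs directory under hbase.rootdir ("Moved 1 WAL file(s) to .../oldWALs" above). A test or operator can confirm the archival by listing that directory; the sketch below is a hedged illustration, and the CommonFSUtils helper and class name should be treated as assumptions for this branch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.util.CommonFSUtils;

// Illustrative sketch for inspecting archived WALs like the ones moved in the log above.
public class ArchivedWalListingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path rootDir = CommonFSUtils.getRootDir(conf);                        // hbase.rootdir
    Path oldWals = new Path(rootDir, HConstants.HREGION_OLDLOGDIR_NAME);  // the "oldWALs" directory
    FileSystem fs = rootDir.getFileSystem(conf);
    for (FileStatus wal : fs.listStatus(oldWals)) {
      System.out.println(wal.getPath() + " (" + wal.getLen() + " bytes)");
    }
  }
}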
2023-07-17 22:15:55,001 INFO [RS:2;jenkins-hbase4:36311] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:55,002 INFO [RS:2;jenkins-hbase4:36311] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36311 2023-07-17 22:15:55,003 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:55,004 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:55,004 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36311,1689632151335 2023-07-17 22:15:55,004 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:55,004 DEBUG [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs 2023-07-17 22:15:55,004 INFO [RS:1;jenkins-hbase4:34241] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34241%2C1689632151301:(num 1689632151799) 2023-07-17 22:15:55,004 DEBUG [RS:1;jenkins-hbase4:34241] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:55,004 INFO [RS:1;jenkins-hbase4:34241] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:55,004 INFO [RS:1;jenkins-hbase4:34241] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:55,004 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-17 22:15:55,005 INFO [RS:1;jenkins-hbase4:34241] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34241 2023-07-17 22:15:55,005 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36311,1689632151335] 2023-07-17 22:15:55,005 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36311,1689632151335; numProcessing=2 2023-07-17 22:15:55,008 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36311,1689632151335 already deleted, retry=false 2023-07-17 22:15:55,008 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:55,008 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36311,1689632151335 expired; onlineServers=2 2023-07-17 22:15:55,008 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:55,008 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34241,1689632151301 2023-07-17 22:15:55,009 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34241,1689632151301] 2023-07-17 22:15:55,009 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34241,1689632151301; numProcessing=3 2023-07-17 22:15:55,010 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34241,1689632151301 already deleted, retry=false 2023-07-17 22:15:55,010 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34241,1689632151301 expired; onlineServers=1 2023-07-17 22:15:55,099 DEBUG [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/oldWALs 2023-07-17 22:15:55,099 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32813%2C1689632151260:(num 1689632151799) 2023-07-17 22:15:55,099 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:55,099 INFO [RS:0;jenkins-hbase4:32813] regionserver.LeaseManager(133): Closed leases 2023-07-17 22:15:55,099 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 22:15:55,099 INFO [RS:0;jenkins-hbase4:32813] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 22:15:55,100 INFO [RS:0;jenkins-hbase4:32813] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-17 22:15:55,100 INFO [RS:0;jenkins-hbase4:32813] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 22:15:55,100 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:55,101 INFO [RS:0;jenkins-hbase4:32813] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32813 2023-07-17 22:15:55,102 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32813,1689632151260 2023-07-17 22:15:55,102 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 22:15:55,103 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32813,1689632151260] 2023-07-17 22:15:55,103 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32813,1689632151260; numProcessing=4 2023-07-17 22:15:55,104 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32813,1689632151260 already deleted, retry=false 2023-07-17 22:15:55,104 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32813,1689632151260 expired; onlineServers=0 2023-07-17 22:15:55,104 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37449,1689632151185' ***** 2023-07-17 22:15:55,105 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-17 22:15:55,105 DEBUG [M:0;jenkins-hbase4:37449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@568c3fea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 22:15:55,105 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 22:15:55,107 INFO [M:0;jenkins-hbase4:37449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@17bb529f{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 22:15:55,108 INFO [M:0;jenkins-hbase4:37449] server.AbstractConnector(383): Stopped ServerConnector@43a47f7f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:55,108 INFO [M:0;jenkins-hbase4:37449] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 22:15:55,108 INFO [M:0;jenkins-hbase4:37449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4f269b40{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 22:15:55,109 INFO [M:0;jenkins-hbase4:37449] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@257227ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/hadoop.log.dir/,STOPPED} 2023-07-17 22:15:55,109 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-17 22:15:55,109 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 22:15:55,109 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37449,1689632151185 2023-07-17 22:15:55,109 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37449,1689632151185; all regions closed. 2023-07-17 22:15:55,109 DEBUG [M:0;jenkins-hbase4:37449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 22:15:55,109 INFO [M:0;jenkins-hbase4:37449] master.HMaster(1491): Stopping master jetty server 2023-07-17 22:15:55,109 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 22:15:55,110 INFO [M:0;jenkins-hbase4:37449] server.AbstractConnector(383): Stopped ServerConnector@13e6a5f3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 22:15:55,110 DEBUG [M:0;jenkins-hbase4:37449] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-17 22:15:55,110 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-17 22:15:55,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632151552] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689632151552,5,FailOnTimeoutGroup] 2023-07-17 22:15:55,110 DEBUG [M:0;jenkins-hbase4:37449] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-17 22:15:55,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632151552] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689632151552,5,FailOnTimeoutGroup] 2023-07-17 22:15:55,111 INFO [M:0;jenkins-hbase4:37449] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-17 22:15:55,111 INFO [M:0;jenkins-hbase4:37449] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-17 22:15:55,111 INFO [M:0;jenkins-hbase4:37449] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-17 22:15:55,111 DEBUG [M:0;jenkins-hbase4:37449] master.HMaster(1512): Stopping service threads 2023-07-17 22:15:55,111 INFO [M:0;jenkins-hbase4:37449] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-17 22:15:55,111 ERROR [M:0;jenkins-hbase4:37449] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-17 22:15:55,111 INFO [M:0;jenkins-hbase4:37449] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-17 22:15:55,111 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-17 22:15:55,111 DEBUG [M:0;jenkins-hbase4:37449] zookeeper.ZKUtil(398): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-17 22:15:55,112 WARN [M:0;jenkins-hbase4:37449] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-17 22:15:55,112 INFO [M:0;jenkins-hbase4:37449] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-17 22:15:55,112 INFO [M:0;jenkins-hbase4:37449] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-17 22:15:55,112 DEBUG [M:0;jenkins-hbase4:37449] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 22:15:55,112 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:55,112 DEBUG [M:0;jenkins-hbase4:37449] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:55,112 DEBUG [M:0;jenkins-hbase4:37449] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 22:15:55,112 DEBUG [M:0;jenkins-hbase4:37449] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 22:15:55,112 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-17 22:15:55,123 INFO [M:0;jenkins-hbase4:37449] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2ac71f9b4f5c4077a8645312c51dac77 2023-07-17 22:15:55,128 DEBUG [M:0;jenkins-hbase4:37449] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2ac71f9b4f5c4077a8645312c51dac77 as hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ac71f9b4f5c4077a8645312c51dac77 2023-07-17 22:15:55,133 INFO [M:0;jenkins-hbase4:37449] regionserver.HStore(1080): Added hdfs://localhost:46771/user/jenkins/test-data/e66a63e2-1e57-04b4-987e-99e25c0f11bb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ac71f9b4f5c4077a8645312c51dac77, entries=22, sequenceid=175, filesize=11.1 K 2023-07-17 22:15:55,133 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78048, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-17 22:15:55,135 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 22:15:55,135 DEBUG [M:0;jenkins-hbase4:37449] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 22:15:55,138 INFO [M:0;jenkins-hbase4:37449] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-17 22:15:55,138 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 22:15:55,139 INFO [M:0;jenkins-hbase4:37449] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37449 2023-07-17 22:15:55,140 DEBUG [M:0;jenkins-hbase4:37449] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37449,1689632151185 already deleted, retry=false 2023-07-17 22:15:55,176 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,176 INFO [RS:1;jenkins-hbase4:34241] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34241,1689632151301; zookeeper connection closed. 
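The entries above show the master flushing and closing its local master:store region (the procedure state under MasterData) before stopping its RPC server and ZooKeeper session. During a full shutdown this is driven by shutdownMiniCluster, but a test can also stop and wait on the master individually; the sketch below is a non-authoritative illustration that assumes the MiniHBaseCluster stopMaster/waitOnMaster helpers, and the class and method names are made up:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

// Illustrative sketch; the log above comes from a full-cluster shutdown, not this call.
public class MasterStopSketch {
  public static void stopActiveMaster(HBaseTestingUtility testUtil) throws Exception {
    MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
    // An orderly stop of master 0 produces a sequence like the one logged above:
    // the local master:store region is flushed and closed, then the NettyRpcServer
    // and the master's ZooKeeper connection are shut down.
    cluster.stopMaster(0);
    cluster.waitOnMaster(0);
  }
}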
2023-07-17 22:15:55,176 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:34241-0x101755b19770002, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,176 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7b934fa7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7b934fa7 2023-07-17 22:15:55,276 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,276 INFO [RS:2;jenkins-hbase4:36311] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36311,1689632151335; zookeeper connection closed. 2023-07-17 22:15:55,276 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x101755b19770003, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,277 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@16f9d204] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@16f9d204 2023-07-17 22:15:55,376 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,376 INFO [RS:3;jenkins-hbase4:33873] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33873,1689632152536; zookeeper connection closed. 2023-07-17 22:15:55,377 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x101755b1977000b, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,377 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@199c7f41] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@199c7f41 2023-07-17 22:15:55,577 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,577 INFO [M:0;jenkins-hbase4:37449] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37449,1689632151185; zookeeper connection closed. 2023-07-17 22:15:55,577 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): master:37449-0x101755b19770000, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,677 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,677 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32813,1689632151260; zookeeper connection closed. 
2023-07-17 22:15:55,677 DEBUG [Listener at localhost/44229-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x101755b19770001, quorum=127.0.0.1:53229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 22:15:55,678 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7ab2442] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7ab2442 2023-07-17 22:15:55,678 INFO [Listener at localhost/44229] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-17 22:15:55,678 WARN [Listener at localhost/44229] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:55,681 INFO [Listener at localhost/44229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:55,785 WARN [BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:55,785 WARN [BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-715164769-172.31.14.131-1689632150312 (Datanode Uuid 4e59c173-99d5-44bd-ac8e-2eed8a828332) service to localhost/127.0.0.1:46771 2023-07-17 22:15:55,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data5/current/BP-715164769-172.31.14.131-1689632150312] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:55,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data6/current/BP-715164769-172.31.14.131-1689632150312] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:55,787 WARN [Listener at localhost/44229] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:55,790 INFO [Listener at localhost/44229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:55,892 WARN [BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:55,892 WARN [BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-715164769-172.31.14.131-1689632150312 (Datanode Uuid ce28f8a5-8b75-4507-ad20-d9ae63e623cf) service to localhost/127.0.0.1:46771 2023-07-17 22:15:55,893 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data3/current/BP-715164769-172.31.14.131-1689632150312] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:55,893 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data4/current/BP-715164769-172.31.14.131-1689632150312] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:55,894 WARN [Listener at localhost/44229] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 22:15:55,902 INFO [Listener at localhost/44229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:56,004 WARN [BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 22:15:56,004 WARN [BP-715164769-172.31.14.131-1689632150312 heartbeating to localhost/127.0.0.1:46771] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-715164769-172.31.14.131-1689632150312 (Datanode Uuid beab89f0-2fda-4c0b-b0a2-7ab9ea552411) service to localhost/127.0.0.1:46771 2023-07-17 22:15:56,005 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data1/current/BP-715164769-172.31.14.131-1689632150312] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:56,006 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2e88a1ec-9afd-0a3a-637b-c8ae95e162e3/cluster_b91eec69-5760-bb68-9e30-074d096c4455/dfs/data/data2/current/BP-715164769-172.31.14.131-1689632150312] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 22:15:56,016 INFO [Listener at localhost/44229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 22:15:56,131 INFO [Listener at localhost/44229] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-17 22:15:56,161 INFO [Listener at localhost/44229] hbase.HBaseTestingUtility(1293): Minicluster is down