2023-07-16 23:14:49,172 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4 2023-07-16 23:14:49,192 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-16 23:14:49,210 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 23:14:49,211 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229, deleteOnExit=true 2023-07-16 23:14:49,211 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 23:14:49,212 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/test.cache.data in system properties and HBase conf 2023-07-16 23:14:49,212 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 23:14:49,213 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir in system properties and HBase conf 2023-07-16 23:14:49,213 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 23:14:49,214 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 23:14:49,214 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 23:14:49,331 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-16 23:14:49,747 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 23:14:49,752 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 23:14:49,752 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 23:14:49,752 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 23:14:49,753 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 23:14:49,753 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 23:14:49,753 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 23:14:49,753 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 23:14:49,754 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 23:14:49,754 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 23:14:49,755 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/nfs.dump.dir in system properties and HBase conf 2023-07-16 23:14:49,755 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir in system properties and HBase conf 2023-07-16 23:14:49,755 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 23:14:49,756 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 23:14:49,756 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 23:14:50,306 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 23:14:50,310 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 23:14:50,589 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-16 23:14:50,801 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-16 23:14:50,819 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:14:50,859 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:14:50,895 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/Jetty_localhost_43783_hdfs____bdmlvh/webapp 2023-07-16 23:14:51,046 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43783 2023-07-16 23:14:51,085 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 23:14:51,085 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 23:14:51,459 WARN [Listener at localhost/34675] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:14:51,538 WARN [Listener at localhost/34675] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:14:51,559 WARN [Listener at localhost/34675] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:14:51,565 INFO [Listener at localhost/34675] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:14:51,570 INFO [Listener at localhost/34675] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/Jetty_localhost_38333_datanode____teddf9/webapp 2023-07-16 23:14:51,687 INFO [Listener at localhost/34675] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38333 2023-07-16 23:14:52,252 WARN [Listener at localhost/33319] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:14:52,271 WARN [Listener at localhost/33319] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:14:52,279 WARN [Listener at localhost/33319] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:14:52,281 INFO [Listener at localhost/33319] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:14:52,288 INFO [Listener at localhost/33319] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/Jetty_localhost_45189_datanode____rcvq60/webapp 2023-07-16 23:14:52,445 INFO [Listener at localhost/33319] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45189 2023-07-16 23:14:52,486 WARN [Listener at localhost/45893] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:14:52,520 WARN [Listener at localhost/45893] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:14:52,523 WARN [Listener at localhost/45893] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:14:52,526 INFO [Listener at localhost/45893] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:14:52,531 INFO [Listener at localhost/45893] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/Jetty_localhost_41171_datanode____.xr19t7/webapp 2023-07-16 23:14:52,664 INFO [Listener at localhost/45893] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41171 2023-07-16 23:14:52,693 WARN [Listener at localhost/40131] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:14:52,904 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d37ffa599118bed: Processing first storage report for DS-7aac909c-0053-4071-bacc-86c8683b259e from datanode ffb69bb2-fbad-48a3-bdb3-6dbdeceec12c 2023-07-16 23:14:52,906 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d37ffa599118bed: from storage DS-7aac909c-0053-4071-bacc-86c8683b259e node DatanodeRegistration(127.0.0.1:35019, datanodeUuid=ffb69bb2-fbad-48a3-bdb3-6dbdeceec12c, infoPort=39975, 
infoSecurePort=0, ipcPort=33319, storageInfo=lv=-57;cid=testClusterID;nsid=1683268905;c=1689549290377), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-16 23:14:52,906 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x81824c1a11dda24: Processing first storage report for DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e from datanode 940b67ca-7731-4a45-b3b2-b6cb647dfe14 2023-07-16 23:14:52,906 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x81824c1a11dda24: from storage DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e node DatanodeRegistration(127.0.0.1:39013, datanodeUuid=940b67ca-7731-4a45-b3b2-b6cb647dfe14, infoPort=44069, infoSecurePort=0, ipcPort=45893, storageInfo=lv=-57;cid=testClusterID;nsid=1683268905;c=1689549290377), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:14:52,906 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8e08fe0277c34dd7: Processing first storage report for DS-cac95491-a5d8-4b6e-8b8f-24240dccb300 from datanode 868ca1ba-fb9c-4bc7-9f78-8e2c4cf64012 2023-07-16 23:14:52,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8e08fe0277c34dd7: from storage DS-cac95491-a5d8-4b6e-8b8f-24240dccb300 node DatanodeRegistration(127.0.0.1:39633, datanodeUuid=868ca1ba-fb9c-4bc7-9f78-8e2c4cf64012, infoPort=35259, infoSecurePort=0, ipcPort=40131, storageInfo=lv=-57;cid=testClusterID;nsid=1683268905;c=1689549290377), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 23:14:52,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d37ffa599118bed: Processing first storage report for DS-876786aa-632f-49a6-aa74-b88c68d9c989 from datanode ffb69bb2-fbad-48a3-bdb3-6dbdeceec12c 2023-07-16 23:14:52,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d37ffa599118bed: from storage DS-876786aa-632f-49a6-aa74-b88c68d9c989 node DatanodeRegistration(127.0.0.1:35019, datanodeUuid=ffb69bb2-fbad-48a3-bdb3-6dbdeceec12c, infoPort=39975, infoSecurePort=0, ipcPort=33319, storageInfo=lv=-57;cid=testClusterID;nsid=1683268905;c=1689549290377), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:14:52,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x81824c1a11dda24: Processing first storage report for DS-f495c75f-6d2c-456e-b171-54e928a699d1 from datanode 940b67ca-7731-4a45-b3b2-b6cb647dfe14 2023-07-16 23:14:52,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x81824c1a11dda24: from storage DS-f495c75f-6d2c-456e-b171-54e928a699d1 node DatanodeRegistration(127.0.0.1:39013, datanodeUuid=940b67ca-7731-4a45-b3b2-b6cb647dfe14, infoPort=44069, infoSecurePort=0, ipcPort=45893, storageInfo=lv=-57;cid=testClusterID;nsid=1683268905;c=1689549290377), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:14:52,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8e08fe0277c34dd7: Processing first storage report for DS-ed54e9d9-0564-439e-8a93-055a454497c4 from datanode 868ca1ba-fb9c-4bc7-9f78-8e2c4cf64012 2023-07-16 23:14:52,908 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8e08fe0277c34dd7: from storage 
DS-ed54e9d9-0564-439e-8a93-055a454497c4 node DatanodeRegistration(127.0.0.1:39633, datanodeUuid=868ca1ba-fb9c-4bc7-9f78-8e2c4cf64012, infoPort=35259, infoSecurePort=0, ipcPort=40131, storageInfo=lv=-57;cid=testClusterID;nsid=1683268905;c=1689549290377), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:14:53,129 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4 2023-07-16 23:14:53,214 INFO [Listener at localhost/40131] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/zookeeper_0, clientPort=63904, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 23:14:53,229 INFO [Listener at localhost/40131] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63904 2023-07-16 23:14:53,238 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:53,240 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:53,914 INFO [Listener at localhost/40131] util.FSUtils(471): Created version file at hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002 with version=8 2023-07-16 23:14:53,914 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/hbase-staging 2023-07-16 23:14:53,923 DEBUG [Listener at localhost/40131] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 23:14:53,923 DEBUG [Listener at localhost/40131] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 23:14:53,923 DEBUG [Listener at localhost/40131] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 23:14:53,923 DEBUG [Listener at localhost/40131] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-16 23:14:54,289 INFO [Listener at localhost/40131] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-16 23:14:54,840 INFO [Listener at localhost/40131] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:14:54,888 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:54,888 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:54,889 INFO [Listener at localhost/40131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:14:54,889 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:54,889 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:14:55,069 INFO [Listener at localhost/40131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:14:55,158 DEBUG [Listener at localhost/40131] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-16 23:14:55,274 INFO [Listener at localhost/40131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37359 2023-07-16 23:14:55,286 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:55,289 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:55,316 INFO [Listener at localhost/40131] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37359 connecting to ZooKeeper ensemble=127.0.0.1:63904 2023-07-16 23:14:55,368 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:373590x0, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:14:55,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37359-0x101706ac9920000 connected 2023-07-16 23:14:55,407 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:14:55,408 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:14:55,419 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:14:55,429 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37359 2023-07-16 23:14:55,429 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37359 2023-07-16 23:14:55,429 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37359 2023-07-16 23:14:55,430 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37359 2023-07-16 23:14:55,430 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37359 2023-07-16 23:14:55,463 INFO [Listener at localhost/40131] log.Log(170): Logging initialized @7087ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-16 23:14:55,608 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:14:55,608 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:14:55,609 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:14:55,611 INFO [Listener at localhost/40131] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 23:14:55,611 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:14:55,611 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:14:55,615 INFO [Listener at localhost/40131] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 23:14:55,682 INFO [Listener at localhost/40131] http.HttpServer(1146): Jetty bound to port 33449 2023-07-16 23:14:55,684 INFO [Listener at localhost/40131] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:14:55,717 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:55,721 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7410039f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:14:55,722 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:55,722 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34fd62ed{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:14:55,972 INFO [Listener at localhost/40131] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:14:55,996 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:14:55,997 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:14:55,999 INFO [Listener at localhost/40131] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 23:14:56,005 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,039 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@55ffcf1a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/jetty-0_0_0_0-33449-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6736120168095248888/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 23:14:56,052 INFO [Listener at localhost/40131] server.AbstractConnector(333): Started ServerConnector@2092751{HTTP/1.1, (http/1.1)}{0.0.0.0:33449} 2023-07-16 23:14:56,052 INFO [Listener at localhost/40131] server.Server(415): Started @7676ms 2023-07-16 23:14:56,056 INFO [Listener at localhost/40131] master.HMaster(444): hbase.rootdir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002, hbase.cluster.distributed=false 2023-07-16 23:14:56,126 INFO [Listener at localhost/40131] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:14:56,126 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,126 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,127 INFO 
[Listener at localhost/40131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:14:56,127 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,127 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:14:56,132 INFO [Listener at localhost/40131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:14:56,135 INFO [Listener at localhost/40131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38989 2023-07-16 23:14:56,138 INFO [Listener at localhost/40131] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:14:56,145 DEBUG [Listener at localhost/40131] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:14:56,146 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,147 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,149 INFO [Listener at localhost/40131] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38989 connecting to ZooKeeper ensemble=127.0.0.1:63904 2023-07-16 23:14:56,152 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:389890x0, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:14:56,154 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38989-0x101706ac9920001 connected 2023-07-16 23:14:56,154 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:14:56,155 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:14:56,156 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:14:56,157 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38989 2023-07-16 23:14:56,157 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38989 2023-07-16 23:14:56,157 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38989 2023-07-16 23:14:56,158 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38989 2023-07-16 23:14:56,158 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38989 2023-07-16 23:14:56,161 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:14:56,161 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:14:56,161 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:14:56,162 INFO [Listener at localhost/40131] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:14:56,162 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:14:56,162 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:14:56,163 INFO [Listener at localhost/40131] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 23:14:56,165 INFO [Listener at localhost/40131] http.HttpServer(1146): Jetty bound to port 39631 2023-07-16 23:14:56,165 INFO [Listener at localhost/40131] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:14:56,168 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,169 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21e06b66{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:14:56,169 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,169 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@655f7375{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:14:56,314 INFO [Listener at localhost/40131] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:14:56,316 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:14:56,316 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:14:56,317 INFO [Listener at localhost/40131] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:14:56,318 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,322 INFO 
[Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2ade5edf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/jetty-0_0_0_0-39631-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6056240204847905558/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:14:56,323 INFO [Listener at localhost/40131] server.AbstractConnector(333): Started ServerConnector@3e4a8ce8{HTTP/1.1, (http/1.1)}{0.0.0.0:39631} 2023-07-16 23:14:56,323 INFO [Listener at localhost/40131] server.Server(415): Started @7947ms 2023-07-16 23:14:56,335 INFO [Listener at localhost/40131] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:14:56,336 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,336 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,336 INFO [Listener at localhost/40131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:14:56,336 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,336 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:14:56,337 INFO [Listener at localhost/40131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:14:56,338 INFO [Listener at localhost/40131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33913 2023-07-16 23:14:56,339 INFO [Listener at localhost/40131] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:14:56,340 DEBUG [Listener at localhost/40131] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:14:56,340 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,343 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,345 INFO [Listener at localhost/40131] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33913 connecting to ZooKeeper ensemble=127.0.0.1:63904 2023-07-16 23:14:56,348 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:339130x0, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 
23:14:56,350 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33913-0x101706ac9920002 connected 2023-07-16 23:14:56,350 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:14:56,351 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:14:56,351 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:14:56,352 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33913 2023-07-16 23:14:56,352 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33913 2023-07-16 23:14:56,353 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33913 2023-07-16 23:14:56,354 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33913 2023-07-16 23:14:56,354 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33913 2023-07-16 23:14:56,356 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:14:56,357 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:14:56,357 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:14:56,357 INFO [Listener at localhost/40131] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:14:56,358 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:14:56,358 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:14:56,358 INFO [Listener at localhost/40131] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 23:14:56,359 INFO [Listener at localhost/40131] http.HttpServer(1146): Jetty bound to port 41459 2023-07-16 23:14:56,359 INFO [Listener at localhost/40131] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:14:56,361 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,361 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32f14bae{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:14:56,362 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,362 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@129754f6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:14:56,489 INFO [Listener at localhost/40131] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:14:56,490 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:14:56,490 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:14:56,490 INFO [Listener at localhost/40131] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 23:14:56,491 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,492 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@333da51b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/jetty-0_0_0_0-41459-hbase-server-2_4_18-SNAPSHOT_jar-_-any-768532637802701413/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:14:56,493 INFO [Listener at localhost/40131] server.AbstractConnector(333): Started ServerConnector@25647a91{HTTP/1.1, (http/1.1)}{0.0.0.0:41459} 2023-07-16 23:14:56,494 INFO [Listener at localhost/40131] server.Server(415): Started @8118ms 2023-07-16 23:14:56,508 INFO [Listener at localhost/40131] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:14:56,508 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,508 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,508 INFO [Listener at localhost/40131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:14:56,509 INFO 
[Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:14:56,509 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:14:56,509 INFO [Listener at localhost/40131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:14:56,511 INFO [Listener at localhost/40131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41683 2023-07-16 23:14:56,511 INFO [Listener at localhost/40131] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:14:56,513 DEBUG [Listener at localhost/40131] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:14:56,514 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,515 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,516 INFO [Listener at localhost/40131] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41683 connecting to ZooKeeper ensemble=127.0.0.1:63904 2023-07-16 23:14:56,522 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:416830x0, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:14:56,523 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41683-0x101706ac9920003 connected 2023-07-16 23:14:56,523 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:14:56,524 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:14:56,525 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:14:56,525 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41683 2023-07-16 23:14:56,526 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41683 2023-07-16 23:14:56,526 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41683 2023-07-16 23:14:56,527 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41683 2023-07-16 23:14:56,527 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41683 2023-07-16 23:14:56,530 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:14:56,530 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:14:56,530 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:14:56,531 INFO [Listener at localhost/40131] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:14:56,531 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:14:56,531 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:14:56,531 INFO [Listener at localhost/40131] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 23:14:56,532 INFO [Listener at localhost/40131] http.HttpServer(1146): Jetty bound to port 44099 2023-07-16 23:14:56,532 INFO [Listener at localhost/40131] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:14:56,534 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,534 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@513690f4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:14:56,534 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,535 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1a254359{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:14:56,655 INFO [Listener at localhost/40131] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:14:56,656 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:14:56,656 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:14:56,657 INFO [Listener at localhost/40131] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:14:56,658 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:14:56,659 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@71934adb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/jetty-0_0_0_0-44099-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7444293347864313067/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:14:56,660 INFO [Listener at localhost/40131] server.AbstractConnector(333): Started ServerConnector@4c5e3ae5{HTTP/1.1, (http/1.1)}{0.0.0.0:44099} 2023-07-16 23:14:56,660 INFO [Listener at localhost/40131] server.Server(415): Started @8284ms 2023-07-16 23:14:56,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:14:56,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@8b592d2{HTTP/1.1, (http/1.1)}{0.0.0.0:33417} 2023-07-16 23:14:56,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8297ms 2023-07-16 23:14:56,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:56,684 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 23:14:56,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:56,705 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:14:56,705 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:14:56,705 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:14:56,705 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:14:56,707 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:56,707 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:14:56,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37359,1689549294108 from backup master directory 2023-07-16 23:14:56,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:14:56,713 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:56,713 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 23:14:56,713 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:14:56,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:56,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-16 23:14:56,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-16 23:14:56,804 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/hbase.id with ID: 70eccbdf-e919-4873-8226-1f58665f9c7c 2023-07-16 23:14:56,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:14:56,874 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:56,925 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2626cde5 to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:14:56,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@655f887e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:14:56,984 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:14:56,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 23:14:57,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-16 23:14:57,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-16 23:14:57,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-16 23:14:57,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-16 23:14:57,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:14:57,078 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store-tmp 2023-07-16 23:14:57,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:57,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 23:14:57,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:14:57,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:14:57,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 23:14:57,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:14:57,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 23:14:57,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:14:57,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/WALs/jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:57,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37359%2C1689549294108, suffix=, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/WALs/jenkins-hbase4.apache.org,37359,1689549294108, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/oldWALs, maxLogs=10 2023-07-16 23:14:57,261 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:14:57,261 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:14:57,261 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:14:57,270 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-16 23:14:57,355 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/WALs/jenkins-hbase4.apache.org,37359,1689549294108/jenkins-hbase4.apache.org%2C37359%2C1689549294108.1689549297193 2023-07-16 23:14:57,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK], DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK]] 2023-07-16 23:14:57,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:14:57,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:57,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:14:57,365 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:14:57,441 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:14:57,448 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 23:14:57,485 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 23:14:57,498 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-16 23:14:57,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:14:57,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:14:57,522 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:14:57,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:14:57,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11188444800, jitterRate=0.04200512170791626}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:14:57,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:14:57,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 23:14:57,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 23:14:57,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 23:14:57,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 23:14:57,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 23:14:57,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 35 msec 2023-07-16 23:14:57,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 23:14:57,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 23:14:57,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-16 23:14:57,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 23:14:57,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 23:14:57,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 23:14:57,641 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:57,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 23:14:57,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 23:14:57,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 23:14:57,662 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:14:57,662 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:14:57,662 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:14:57,662 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:14:57,662 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:57,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37359,1689549294108, sessionid=0x101706ac9920000, setting cluster-up flag (Was=false) 2023-07-16 23:14:57,685 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:57,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 23:14:57,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:57,702 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:57,709 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 23:14:57,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:14:57,715 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.hbase-snapshot/.tmp 2023-07-16 23:14:57,765 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(951): ClusterId : 70eccbdf-e919-4873-8226-1f58665f9c7c 2023-07-16 23:14:57,766 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(951): ClusterId : 70eccbdf-e919-4873-8226-1f58665f9c7c 2023-07-16 23:14:57,765 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(951): ClusterId : 70eccbdf-e919-4873-8226-1f58665f9c7c 2023-07-16 23:14:57,773 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:14:57,773 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:14:57,773 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:14:57,781 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:14:57,781 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:14:57,782 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:14:57,782 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:14:57,786 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:14:57,786 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:14:57,786 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:14:57,787 DEBUG [RS:2;jenkins-hbase4:41683] zookeeper.ReadOnlyZKClient(139): Connect 0x5f627fca to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:14:57,789 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:14:57,791 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot 
initialized 2023-07-16 23:14:57,796 DEBUG [RS:1;jenkins-hbase4:33913] zookeeper.ReadOnlyZKClient(139): Connect 0x0cb18b5c to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:14:57,796 DEBUG [RS:0;jenkins-hbase4:38989] zookeeper.ReadOnlyZKClient(139): Connect 0x0aed5709 to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:14:57,802 DEBUG [RS:2;jenkins-hbase4:41683] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ef806dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:14:57,803 DEBUG [RS:2;jenkins-hbase4:41683] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1cbc9cc2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:14:57,812 DEBUG [RS:0;jenkins-hbase4:38989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22479cc6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:14:57,812 DEBUG [RS:0;jenkins-hbase4:38989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78f21cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:14:57,813 DEBUG [RS:1;jenkins-hbase4:33913] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@709931c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:14:57,813 DEBUG [RS:1;jenkins-hbase4:33913] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74d07178, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:14:57,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 23:14:57,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 23:14:57,833 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38989 2023-07-16 23:14:57,834 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33913 2023-07-16 23:14:57,836 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 23:14:57,836 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41683 2023-07-16 23:14:57,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 23:14:57,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-16 23:14:57,840 INFO [RS:1;jenkins-hbase4:33913] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:14:57,840 INFO [RS:2;jenkins-hbase4:41683] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:14:57,841 INFO [RS:2;jenkins-hbase4:41683] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:14:57,841 INFO [RS:0;jenkins-hbase4:38989] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:14:57,841 INFO [RS:0;jenkins-hbase4:38989] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:14:57,841 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:14:57,841 INFO [RS:1;jenkins-hbase4:33913] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:14:57,841 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:14:57,842 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 23:14:57,845 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:38989, startcode=1689549296125 2023-07-16 23:14:57,845 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:33913, startcode=1689549296335 2023-07-16 23:14:57,845 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:41683, startcode=1689549296507 2023-07-16 23:14:57,869 DEBUG [RS:2;jenkins-hbase4:41683] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:14:57,869 DEBUG [RS:0;jenkins-hbase4:38989] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:14:57,869 DEBUG [RS:1;jenkins-hbase4:33913] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:14:57,958 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53319, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:14:57,958 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55775, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:14:57,961 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33685, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:14:57,973 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:14:57,979 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 23:14:57,986 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:14:57,996 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:14:58,029 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 23:14:58,029 WARN [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 23:14:58,029 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 23:14:58,030 WARN [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 23:14:58,029 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 23:14:58,030 WARN [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 23:14:58,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 23:14:58,050 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 23:14:58,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 23:14:58,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:14:58,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689549328056 2023-07-16 23:14:58,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 23:14:58,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 23:14:58,065 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 23:14:58,066 INFO [PEWorker-2] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 23:14:58,069 INFO [PEWorker-2] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 23:14:58,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 23:14:58,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 23:14:58,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 23:14:58,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 23:14:58,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 23:14:58,080 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 23:14:58,080 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 23:14:58,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 23:14:58,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 23:14:58,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549298097,5,FailOnTimeoutGroup] 2023-07-16 23:14:58,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549298098,5,FailOnTimeoutGroup] 2023-07-16 23:14:58,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 23:14:58,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-16 23:14:58,132 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:33913, startcode=1689549296335 2023-07-16 23:14:58,132 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:41683, startcode=1689549296507 2023-07-16 23:14:58,132 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:38989, startcode=1689549296125 2023-07-16 23:14:58,171 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,177 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:14:58,179 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 23:14:58,188 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,188 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:14:58,189 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 23:14:58,192 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002 2023-07-16 23:14:58,193 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34675 2023-07-16 23:14:58,193 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33449 2023-07-16 23:14:58,193 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002 2023-07-16 23:14:58,193 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34675 2023-07-16 23:14:58,193 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33449 2023-07-16 23:14:58,193 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,201 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 23:14:58,201 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 23:14:58,207 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002 2023-07-16 23:14:58,208 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34675 2023-07-16 23:14:58,208 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33449 2023-07-16 23:14:58,212 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:14:58,213 DEBUG [RS:0;jenkins-hbase4:38989] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,213 DEBUG [RS:1;jenkins-hbase4:33913] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,215 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33913,1689549296335] 2023-07-16 23:14:58,215 WARN [RS:0;jenkins-hbase4:38989] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:14:58,216 DEBUG [RS:2;jenkins-hbase4:41683] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,216 WARN [RS:2;jenkins-hbase4:41683] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:14:58,216 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41683,1689549296507] 2023-07-16 23:14:58,215 WARN [RS:1;jenkins-hbase4:33913] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 23:14:58,216 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38989,1689549296125] 2023-07-16 23:14:58,216 INFO [RS:2;jenkins-hbase4:41683] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:14:58,216 INFO [RS:0;jenkins-hbase4:38989] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:14:58,216 INFO [RS:1;jenkins-hbase4:33913] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:14:58,217 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,217 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,217 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,217 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 23:14:58,218 INFO [PEWorker-2] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 23:14:58,219 INFO [PEWorker-2] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002 2023-07-16 23:14:58,245 DEBUG [RS:2;jenkins-hbase4:41683] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,245 DEBUG [RS:1;jenkins-hbase4:33913] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,251 DEBUG [RS:0;jenkins-hbase4:38989] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,251 DEBUG [RS:2;jenkins-hbase4:41683] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,251 DEBUG [RS:1;jenkins-hbase4:33913] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,252 DEBUG [RS:0;jenkins-hbase4:38989] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,252 DEBUG [RS:2;jenkins-hbase4:41683] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,252 DEBUG [RS:0;jenkins-hbase4:38989] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,253 DEBUG [RS:1;jenkins-hbase4:33913] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,274 DEBUG [RS:0;jenkins-hbase4:38989] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:14:58,278 DEBUG [PEWorker-2] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:58,275 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:14:58,275 DEBUG [RS:1;jenkins-hbase4:33913] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:14:58,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 23:14:58,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info 2023-07-16 23:14:58,286 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for 
minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:14:58,287 INFO [RS:2;jenkins-hbase4:41683] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:14:58,287 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:58,288 INFO [RS:0;jenkins-hbase4:38989] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:14:58,289 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:14:58,287 INFO [RS:1;jenkins-hbase4:33913] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:14:58,292 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:14:58,292 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:14:58,293 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:58,294 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:14:58,296 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table 2023-07-16 23:14:58,297 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:14:58,298 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:58,300 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740 2023-07-16 23:14:58,302 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740 2023-07-16 23:14:58,311 DEBUG [PEWorker-2] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 23:14:58,314 DEBUG [PEWorker-2] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:14:58,329 DEBUG [PEWorker-2] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:14:58,330 INFO [PEWorker-2] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11202647040, jitterRate=0.04332780838012695}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:14:58,330 DEBUG [PEWorker-2] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:14:58,330 DEBUG [PEWorker-2] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:14:58,330 INFO [PEWorker-2] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:14:58,330 DEBUG [PEWorker-2] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:14:58,330 DEBUG [PEWorker-2] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:14:58,330 INFO [RS:0;jenkins-hbase4:38989] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:14:58,331 DEBUG [PEWorker-2] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:14:58,330 INFO [RS:2;jenkins-hbase4:41683] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:14:58,330 INFO [RS:1;jenkins-hbase4:33913] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:14:58,332 INFO [PEWorker-2] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:14:58,332 DEBUG [PEWorker-2] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 23:14:58,340 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 23:14:58,340 INFO 
[PEWorker-2] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 23:14:58,342 INFO [RS:1;jenkins-hbase4:33913] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:14:58,342 INFO [RS:2;jenkins-hbase4:41683] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:14:58,342 INFO [RS:0;jenkins-hbase4:38989] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:14:58,343 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,343 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,343 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,347 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:14:58,347 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:14:58,347 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:14:58,352 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 23:14:58,359 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,359 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,359 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 23:14:58,360 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,360 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,360 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,360 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,360 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,360 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:14:58,361 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,361 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:1;jenkins-hbase4:33913] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:14:58,361 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,362 DEBUG [RS:2;jenkins-hbase4:41683] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,363 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,363 DEBUG [RS:0;jenkins-hbase4:38989] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:14:58,363 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,364 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,364 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,364 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,365 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,365 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 23:14:58,364 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,365 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,366 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,373 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 23:14:58,381 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 23:14:58,388 INFO [RS:1;jenkins-hbase4:33913] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:14:58,388 INFO [RS:0;jenkins-hbase4:38989] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:14:58,388 INFO [RS:2;jenkins-hbase4:41683] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:14:58,392 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33913,1689549296335-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,392 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41683,1689549296507-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:58,392 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38989,1689549296125-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 23:14:58,455 INFO [RS:1;jenkins-hbase4:33913] regionserver.Replication(203): jenkins-hbase4.apache.org,33913,1689549296335 started 2023-07-16 23:14:58,455 INFO [RS:0;jenkins-hbase4:38989] regionserver.Replication(203): jenkins-hbase4.apache.org,38989,1689549296125 started 2023-07-16 23:14:58,455 INFO [RS:2;jenkins-hbase4:41683] regionserver.Replication(203): jenkins-hbase4.apache.org,41683,1689549296507 started 2023-07-16 23:14:58,455 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38989,1689549296125, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38989, sessionid=0x101706ac9920001 2023-07-16 23:14:58,455 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33913,1689549296335, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33913, sessionid=0x101706ac9920002 2023-07-16 23:14:58,455 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41683,1689549296507, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41683, sessionid=0x101706ac9920003 2023-07-16 23:14:58,455 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:14:58,455 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:14:58,455 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:14:58,456 DEBUG [RS:2;jenkins-hbase4:41683] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,456 DEBUG [RS:1;jenkins-hbase4:33913] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,455 DEBUG [RS:0;jenkins-hbase4:38989] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,456 DEBUG [RS:1;jenkins-hbase4:33913] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33913,1689549296335' 2023-07-16 23:14:58,456 DEBUG [RS:2;jenkins-hbase4:41683] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41683,1689549296507' 2023-07-16 23:14:58,457 DEBUG [RS:1;jenkins-hbase4:33913] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:14:58,457 DEBUG [RS:0;jenkins-hbase4:38989] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38989,1689549296125' 2023-07-16 23:14:58,457 DEBUG [RS:0;jenkins-hbase4:38989] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:14:58,457 DEBUG [RS:2;jenkins-hbase4:41683] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:14:58,458 DEBUG [RS:1;jenkins-hbase4:33913] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:14:58,458 DEBUG [RS:0;jenkins-hbase4:38989] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:14:58,458 DEBUG 
[RS:2;jenkins-hbase4:41683] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:14:58,458 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:14:58,458 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:14:58,458 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:14:58,459 DEBUG [RS:1;jenkins-hbase4:33913] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:14:58,459 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:14:58,459 DEBUG [RS:1;jenkins-hbase4:33913] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33913,1689549296335' 2023-07-16 23:14:58,459 DEBUG [RS:1;jenkins-hbase4:33913] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:14:58,459 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:14:58,459 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:14:58,459 DEBUG [RS:0;jenkins-hbase4:38989] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,459 DEBUG [RS:2;jenkins-hbase4:41683] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:58,460 DEBUG [RS:2;jenkins-hbase4:41683] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41683,1689549296507' 2023-07-16 23:14:58,461 DEBUG [RS:2;jenkins-hbase4:41683] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:14:58,460 DEBUG [RS:0;jenkins-hbase4:38989] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38989,1689549296125' 2023-07-16 23:14:58,461 DEBUG [RS:1;jenkins-hbase4:33913] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:14:58,461 DEBUG [RS:0;jenkins-hbase4:38989] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:14:58,461 DEBUG [RS:2;jenkins-hbase4:41683] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:14:58,461 DEBUG [RS:1;jenkins-hbase4:33913] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:14:58,461 DEBUG [RS:0;jenkins-hbase4:38989] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:14:58,461 INFO [RS:1;jenkins-hbase4:33913] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:14:58,462 INFO [RS:1;jenkins-hbase4:33913] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 23:14:58,462 DEBUG [RS:2;jenkins-hbase4:41683] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:14:58,462 DEBUG [RS:0;jenkins-hbase4:38989] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:14:58,462 INFO [RS:2;jenkins-hbase4:41683] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:14:58,462 INFO [RS:0;jenkins-hbase4:38989] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:14:58,462 INFO [RS:0;jenkins-hbase4:38989] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 23:14:58,462 INFO [RS:2;jenkins-hbase4:41683] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 23:14:58,533 DEBUG [jenkins-hbase4:37359] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 23:14:58,546 DEBUG [jenkins-hbase4:37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:14:58,548 DEBUG [jenkins-hbase4:37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:14:58,548 DEBUG [jenkins-hbase4:37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:14:58,548 DEBUG [jenkins-hbase4:37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:14:58,548 DEBUG [jenkins-hbase4:37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:14:58,552 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38989,1689549296125, state=OPENING 2023-07-16 23:14:58,561 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 23:14:58,563 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:58,563 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:14:58,567 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:14:58,573 INFO [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38989%2C1689549296125, suffix=, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,38989,1689549296125, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs, maxLogs=32 2023-07-16 23:14:58,573 INFO [RS:2;jenkins-hbase4:41683] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41683%2C1689549296507, suffix=, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,41683,1689549296507, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs, maxLogs=32 2023-07-16 23:14:58,573 INFO 
[RS:1;jenkins-hbase4:33913] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33913%2C1689549296335, suffix=, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,33913,1689549296335, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs, maxLogs=32 2023-07-16 23:14:58,599 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:14:58,599 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:14:58,606 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:14:58,606 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:14:58,606 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:14:58,606 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:14:58,611 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:14:58,611 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:14:58,611 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:14:58,617 INFO [RS:2;jenkins-hbase4:41683] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,41683,1689549296507/jenkins-hbase4.apache.org%2C41683%2C1689549296507.1689549298581 2023-07-16 23:14:58,617 INFO [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,38989,1689549296125/jenkins-hbase4.apache.org%2C38989%2C1689549296125.1689549298581 2023-07-16 23:14:58,617 DEBUG [RS:2;jenkins-hbase4:41683] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK], DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK], DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK]] 2023-07-16 23:14:58,618 DEBUG [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK], DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK]] 2023-07-16 23:14:58,622 INFO [RS:1;jenkins-hbase4:33913] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,33913,1689549296335/jenkins-hbase4.apache.org%2C33913%2C1689549296335.1689549298581 2023-07-16 23:14:58,622 DEBUG [RS:1;jenkins-hbase4:33913] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK], DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK]] 2023-07-16 23:14:58,652 WARN [ReadOnlyZKClient-127.0.0.1:63904@0x2626cde5] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 23:14:58,684 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:14:58,688 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59138, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:14:58,689 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38989] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:59138 deadline: 1689549358688, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,750 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:58,754 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:14:58,759 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59146, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:14:58,777 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 23:14:58,778 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:14:58,781 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C38989%2C1689549296125.meta, suffix=.meta, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,38989,1689549296125, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs, maxLogs=32 2023-07-16 23:14:58,807 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:14:58,808 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:14:58,807 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:14:58,814 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,38989,1689549296125/jenkins-hbase4.apache.org%2C38989%2C1689549296125.meta.1689549298783.meta 2023-07-16 23:14:58,814 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK], DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK]] 2023-07-16 23:14:58,815 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:14:58,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:14:58,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 23:14:58,821 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-16 23:14:58,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 23:14:58,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:58,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 23:14:58,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 23:14:58,832 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 23:14:58,834 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info 2023-07-16 23:14:58,834 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info 2023-07-16 23:14:58,835 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:14:58,836 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:58,836 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:14:58,838 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:14:58,838 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:14:58,838 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:14:58,839 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:58,839 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:14:58,840 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table 2023-07-16 23:14:58,841 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table 2023-07-16 23:14:58,841 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:14:58,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:58,844 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740 2023-07-16 23:14:58,848 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740 2023-07-16 23:14:58,853 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 23:14:58,855 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:14:58,857 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9853595680, jitterRate=-0.08231239020824432}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:14:58,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:14:58,867 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689549298746 2023-07-16 23:14:58,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 23:14:58,888 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 23:14:58,893 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38989,1689549296125, state=OPEN 2023-07-16 23:14:58,896 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 23:14:58,897 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:14:58,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 23:14:58,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38989,1689549296125 in 330 msec 2023-07-16 23:14:58,908 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 23:14:58,908 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 552 msec 2023-07-16 23:14:58,914 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0630 sec 2023-07-16 23:14:58,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689549298914, completionTime=-1 2023-07-16 23:14:58,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 23:14:58,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-16 23:14:59,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 23:14:59,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689549359016 2023-07-16 23:14:59,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689549419016 2023-07-16 23:14:59,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 101 msec 2023-07-16 23:14:59,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37359,1689549294108-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:59,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37359,1689549294108-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:59,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37359,1689549294108-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:59,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37359, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:59,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 23:14:59,067 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 23:14:59,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 23:14:59,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 23:14:59,092 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 23:14:59,098 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:14:59,103 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:14:59,126 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,132 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 empty. 2023-07-16 23:14:59,133 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,133 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 23:14:59,207 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 23:14:59,208 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:14:59,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 23:14:59,212 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 246728e01e8e564172b05cb8c4263f93, NAME => 'hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:14:59,213 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:14:59,215 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:14:59,220 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,222 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 empty. 2023-07-16 23:14:59,223 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,223 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 23:14:59,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:59,329 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 246728e01e8e564172b05cb8c4263f93, disabling compactions & flushes 2023-07-16 23:14:59,329 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:14:59,329 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:14:59,329 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. after waiting 0 ms 2023-07-16 23:14:59,329 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:14:59,329 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 
2023-07-16 23:14:59,330 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 246728e01e8e564172b05cb8c4263f93: 2023-07-16 23:14:59,332 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 23:14:59,336 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 898ed5e7258b3e0527188384fae4bfe2, NAME => 'hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:14:59,337 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:14:59,367 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549299341"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549299341"}]},"ts":"1689549299341"} 2023-07-16 23:14:59,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:59,390 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 898ed5e7258b3e0527188384fae4bfe2, disabling compactions & flushes 2023-07-16 23:14:59,390 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:14:59,390 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:14:59,390 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. after waiting 0 ms 2023-07-16 23:14:59,390 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:14:59,390 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 
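[Illustrative sketch, not part of this log] The two "create" entries above print the full schema of hbase:namespace and hbase:rsgroup. For orientation only, the hbase:rsgroup descriptor logged there corresponds roughly to the following HBase 2.x client-side construction; the family name, versions, block size, coprocessor and split policy are copied from the logged schema, while the connection/Admin handles are assumed, and in practice only the master itself creates tables in the hbase system namespace.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class RsGroupTableSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Descriptor mirroring the logged 'hbase:rsgroup' schema: one family 'm',
          // VERSIONS=1, BLOCKSIZE=65536, MultiRowMutationEndpoint coprocessor, and a
          // disabled split policy so the table always stays in a single region.
          TableDescriptor rsgroup = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("hbase", "rsgroup"))
              .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
              .setValue("SPLIT_POLICY",
                  "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.ROW)
                  .setBlocksize(65536)
                  .build())
              .build();
          // Issues a CreateTableProcedure analogous to pid=5 in the entries above.
          admin.createTable(rsgroup);
        }
      }
    }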
2023-07-16 23:14:59,390 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 898ed5e7258b3e0527188384fae4bfe2: 2023-07-16 23:14:59,399 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:14:59,401 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549299400"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549299400"}]},"ts":"1689549299400"} 2023-07-16 23:14:59,426 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:14:59,428 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:14:59,432 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:14:59,435 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:14:59,435 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549299435"}]},"ts":"1689549299435"} 2023-07-16 23:14:59,436 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549299429"}]},"ts":"1689549299429"} 2023-07-16 23:14:59,446 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 23:14:59,448 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 23:14:59,452 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:14:59,453 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:14:59,453 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:14:59,453 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:14:59,453 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:14:59,456 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:14:59,456 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:14:59,456 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:14:59,456 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:14:59,456 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:14:59,456 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, ASSIGN}] 2023-07-16 23:14:59,456 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, ASSIGN}] 2023-07-16 23:14:59,460 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, ASSIGN 2023-07-16 23:14:59,462 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:14:59,468 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, ASSIGN 2023-07-16 23:14:59,471 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:14:59,472 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 23:14:59,476 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=898ed5e7258b3e0527188384fae4bfe2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:59,476 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549299475"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549299475"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549299475"}]},"ts":"1689549299475"} 2023-07-16 23:14:59,476 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=246728e01e8e564172b05cb8c4263f93, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:59,477 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549299475"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549299475"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549299475"}]},"ts":"1689549299475"} 2023-07-16 23:14:59,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:14:59,485 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 246728e01e8e564172b05cb8c4263f93, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:14:59,639 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:59,639 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:14:59,643 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49446, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:14:59,644 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:14:59,644 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 898ed5e7258b3e0527188384fae4bfe2, NAME => 'hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:14:59,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:14:59,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. service=MultiRowMutationService 2023-07-16 23:14:59,646 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 23:14:59,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:59,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,650 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:14:59,650 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 246728e01e8e564172b05cb8c4263f93, NAME => 'hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:14:59,650 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:14:59,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,655 INFO [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,656 INFO [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,658 DEBUG [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m 2023-07-16 23:14:59,658 DEBUG [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m 2023-07-16 23:14:59,658 DEBUG 
[StoreOpener-246728e01e8e564172b05cb8c4263f93-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info 2023-07-16 23:14:59,659 DEBUG [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info 2023-07-16 23:14:59,659 INFO [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 898ed5e7258b3e0527188384fae4bfe2 columnFamilyName m 2023-07-16 23:14:59,659 INFO [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 246728e01e8e564172b05cb8c4263f93 columnFamilyName info 2023-07-16 23:14:59,660 INFO [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] regionserver.HStore(310): Store=898ed5e7258b3e0527188384fae4bfe2/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:59,661 INFO [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] regionserver.HStore(310): Store=246728e01e8e564172b05cb8c4263f93/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:14:59,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,663 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 
2023-07-16 23:14:59,663 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:14:59,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:14:59,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:14:59,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:14:59,675 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 246728e01e8e564172b05cb8c4263f93; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10253785120, jitterRate=-0.04504184424877167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:14:59,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 246728e01e8e564172b05cb8c4263f93: 2023-07-16 23:14:59,675 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 898ed5e7258b3e0527188384fae4bfe2; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@ec09040, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:14:59,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 898ed5e7258b3e0527188384fae4bfe2: 2023-07-16 23:14:59,677 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93., pid=9, masterSystemTime=1689549299639 2023-07-16 23:14:59,679 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2., pid=8, masterSystemTime=1689549299636 2023-07-16 23:14:59,682 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:14:59,682 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 
2023-07-16 23:14:59,684 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=246728e01e8e564172b05cb8c4263f93, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:14:59,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:14:59,684 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549299683"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549299683"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549299683"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549299683"}]},"ts":"1689549299683"} 2023-07-16 23:14:59,684 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:14:59,685 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=898ed5e7258b3e0527188384fae4bfe2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:14:59,686 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549299685"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549299685"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549299685"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549299685"}]},"ts":"1689549299685"} 2023-07-16 23:14:59,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 23:14:59,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 246728e01e8e564172b05cb8c4263f93, server=jenkins-hbase4.apache.org,41683,1689549296507 in 203 msec 2023-07-16 23:14:59,696 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 23:14:59,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,38989,1689549296125 in 212 msec 2023-07-16 23:14:59,700 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-16 23:14:59,701 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, ASSIGN in 237 msec 2023-07-16 23:14:59,702 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:14:59,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549299702"}]},"ts":"1689549299702"} 2023-07-16 23:14:59,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=6, resume processing ppid=5 2023-07-16 23:14:59,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, ASSIGN in 243 msec 2023-07-16 23:14:59,706 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:14:59,706 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549299706"}]},"ts":"1689549299706"} 2023-07-16 23:14:59,706 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 23:14:59,709 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 23:14:59,721 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:14:59,721 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:14:59,733 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 513 msec 2023-07-16 23:14:59,733 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 641 msec 2023-07-16 23:14:59,801 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 23:14:59,803 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:14:59,803 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:59,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:14:59,834 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49456, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:14:59,851 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 23:14:59,851 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-16 23:14:59,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 23:14:59,881 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:14:59,890 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 45 msec 2023-07-16 23:14:59,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 23:14:59,912 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:14:59,918 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-07-16 23:14:59,929 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 23:14:59,932 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 23:14:59,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.218sec 2023-07-16 23:14:59,934 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:14:59,934 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:14:59,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 23:14:59,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 23:14:59,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 23:14:59,938 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37359,1689549294108-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 23:14:59,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37359,1689549294108-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
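[Illustrative sketch, not part of this log] The pid=10 and pid=11 entries above are the master creating the built-in 'default' and 'hbase' namespaces via CreateNamespaceProcedure during initialization. The client-visible equivalent of that operation, for a user namespace, is the Admin API below; the 'admin' handle and the namespace name 'ns_example' are assumptions for illustration only.

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public final class NamespaceSketch {
      // 'admin' is an assumed open Admin handle (e.g. connection.getAdmin()).
      static void createUserNamespace(Admin admin) throws IOException {
        // Drives the same master-side CreateNamespaceProcedure path as pid=10/pid=11 above,
        // but for a hypothetical user namespace rather than a system one.
        admin.createNamespace(NamespaceDescriptor.create("ns_example").build());
      }
    }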
2023-07-16 23:14:59,948 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:14:59,949 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 23:14:59,954 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 23:14:59,979 DEBUG [Listener at localhost/40131] zookeeper.ReadOnlyZKClient(139): Connect 0x5b290534 to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:14:59,985 DEBUG [Listener at localhost/40131] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5cd7bfaa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:00,005 DEBUG [hconnection-0x29a77039-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:00,018 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59156, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:00,028 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:15:00,030 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:00,039 DEBUG [Listener at localhost/40131] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 23:15:00,043 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42846, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 23:15:00,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 23:15:00,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:00,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 23:15:00,074 DEBUG [Listener at localhost/40131] zookeeper.ReadOnlyZKClient(139): Connect 0x63acd4d1 to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:00,081 DEBUG [Listener at localhost/40131] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10a44849, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:00,082 INFO [Listener at localhost/40131] zookeeper.RecoverableZooKeeper(93): Process 
identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63904 2023-07-16 23:15:00,085 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:00,085 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101706ac992000a connected 2023-07-16 23:15:00,121 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=698, MaxFileDescriptor=60000, SystemLoadAverage=432, ProcessCount=178, AvailableMemoryMB=3760 2023-07-16 23:15:00,124 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-16 23:15:00,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:00,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:00,202 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 23:15:00,218 INFO [Listener at localhost/40131] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:00,218 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:00,218 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:00,218 INFO [Listener at localhost/40131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:00,219 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:00,219 INFO [Listener at localhost/40131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:00,219 INFO [Listener at localhost/40131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:00,223 INFO [Listener at localhost/40131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43561 2023-07-16 23:15:00,224 INFO [Listener at localhost/40131] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:00,225 DEBUG [Listener at localhost/40131] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:00,227 INFO [Listener at localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:00,236 INFO [Listener at 
localhost/40131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:00,241 INFO [Listener at localhost/40131] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43561 connecting to ZooKeeper ensemble=127.0.0.1:63904 2023-07-16 23:15:00,250 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:435610x0, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:00,251 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(162): regionserver:435610x0, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:15:00,252 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43561-0x101706ac992000b connected 2023-07-16 23:15:00,253 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(162): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 23:15:00,254 DEBUG [Listener at localhost/40131] zookeeper.ZKUtil(164): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:00,259 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43561 2023-07-16 23:15:00,259 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43561 2023-07-16 23:15:00,263 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43561 2023-07-16 23:15:00,263 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43561 2023-07-16 23:15:00,264 DEBUG [Listener at localhost/40131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43561 2023-07-16 23:15:00,266 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:00,267 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:00,267 INFO [Listener at localhost/40131] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:00,267 INFO [Listener at localhost/40131] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:00,268 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:00,268 INFO [Listener at localhost/40131] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:00,268 INFO [Listener at localhost/40131] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system 
property not specified. Disabling /prof endpoint. 2023-07-16 23:15:00,269 INFO [Listener at localhost/40131] http.HttpServer(1146): Jetty bound to port 41531 2023-07-16 23:15:00,269 INFO [Listener at localhost/40131] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:00,275 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:00,275 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@252d6de0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:00,275 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:00,276 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1439103a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:00,420 INFO [Listener at localhost/40131] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:00,421 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:00,421 INFO [Listener at localhost/40131] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:00,422 INFO [Listener at localhost/40131] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 23:15:00,423 INFO [Listener at localhost/40131] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:00,424 INFO [Listener at localhost/40131] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6c9a0ae6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/java.io.tmpdir/jetty-0_0_0_0-41531-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2220814452634896885/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:00,426 INFO [Listener at localhost/40131] server.AbstractConnector(333): Started ServerConnector@4c8b9b{HTTP/1.1, (http/1.1)}{0.0.0.0:41531} 2023-07-16 23:15:00,426 INFO [Listener at localhost/40131] server.Server(415): Started @12050ms 2023-07-16 23:15:00,431 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(951): ClusterId : 70eccbdf-e919-4873-8226-1f58665f9c7c 2023-07-16 23:15:00,432 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:00,439 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:00,439 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:00,442 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:00,445 DEBUG [RS:3;jenkins-hbase4:43561] 
zookeeper.ReadOnlyZKClient(139): Connect 0x2a199d1b to 127.0.0.1:63904 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:00,470 DEBUG [RS:3;jenkins-hbase4:43561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24de7628, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:00,471 DEBUG [RS:3;jenkins-hbase4:43561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20e1f66b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:00,483 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43561 2023-07-16 23:15:00,483 INFO [RS:3;jenkins-hbase4:43561] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:00,484 INFO [RS:3;jenkins-hbase4:43561] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:00,484 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:15:00,485 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37359,1689549294108 with isa=jenkins-hbase4.apache.org/172.31.14.131:43561, startcode=1689549300217 2023-07-16 23:15:00,485 DEBUG [RS:3;jenkins-hbase4:43561] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:00,491 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41295, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:00,492 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37359] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,492 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
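[Illustrative sketch, not part of this log] The block above shows the test bringing up a fourth region server (RS:3 on port 43561) after "Restoring servers: 1", and that server registering with the master via reportForDuty. In the HBase test harness this is typically done against the running mini-cluster roughly as sketched below; the 'util' handle is an assumed HBaseTestingUtility from the test, and the exact helper the rsgroup test base class uses may differ.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public final class ExtraRegionServerSketch {
      // 'util' is an assumed HBaseTestingUtility whose mini-cluster is already running.
      static void addRegionServer(HBaseTestingUtility util) throws Exception {
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        // Spawns an additional RS thread; it then goes through the same
        // reportForDuty / ServerManager registration seen in the entries above.
        JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
        rst.waitForServerOnline();
      }
    }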
2023-07-16 23:15:00,495 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002 2023-07-16 23:15:00,495 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34675 2023-07-16 23:15:00,495 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33449 2023-07-16 23:15:00,505 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:00,505 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:00,505 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:00,505 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:00,508 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:00,508 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:00,508 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:00,508 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:00,508 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:00,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:00,508 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:00,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:00,509 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:15:00,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:00,513 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:00,514 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,518 DEBUG [RS:3;jenkins-hbase4:43561] zookeeper.ZKUtil(162): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,518 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37359,1689549294108] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 23:15:00,518 WARN [RS:3;jenkins-hbase4:43561] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:00,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,518 INFO [RS:3;jenkins-hbase4:43561] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:00,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,519 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,518 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43561,1689549300217] 2023-07-16 23:15:00,530 DEBUG [RS:3;jenkins-hbase4:43561] zookeeper.ZKUtil(162): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:00,530 DEBUG [RS:3;jenkins-hbase4:43561] zookeeper.ZKUtil(162): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:00,531 DEBUG [RS:3;jenkins-hbase4:43561] zookeeper.ZKUtil(162): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:00,532 DEBUG [RS:3;jenkins-hbase4:43561] zookeeper.ZKUtil(162): regionserver:43561-0x101706ac992000b, 
quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,534 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:00,534 INFO [RS:3;jenkins-hbase4:43561] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:00,537 INFO [RS:3;jenkins-hbase4:43561] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:00,537 INFO [RS:3;jenkins-hbase4:43561] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:00,537 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:00,538 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:00,540 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:00,541 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,542 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,542 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,542 DEBUG [RS:3;jenkins-hbase4:43561] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:00,556 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:00,556 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:00,556 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:00,587 INFO [RS:3;jenkins-hbase4:43561] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:00,587 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43561,1689549300217-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:00,614 INFO [RS:3;jenkins-hbase4:43561] regionserver.Replication(203): jenkins-hbase4.apache.org,43561,1689549300217 started 2023-07-16 23:15:00,615 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43561,1689549300217, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43561, sessionid=0x101706ac992000b 2023-07-16 23:15:00,615 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:00,615 DEBUG [RS:3;jenkins-hbase4:43561] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,615 DEBUG [RS:3;jenkins-hbase4:43561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43561,1689549300217' 2023-07-16 23:15:00,615 DEBUG [RS:3;jenkins-hbase4:43561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:00,616 DEBUG [RS:3;jenkins-hbase4:43561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:00,616 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:00,616 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:00,616 DEBUG [RS:3;jenkins-hbase4:43561] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:00,616 DEBUG [RS:3;jenkins-hbase4:43561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43561,1689549300217' 2023-07-16 23:15:00,616 DEBUG [RS:3;jenkins-hbase4:43561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:00,617 DEBUG [RS:3;jenkins-hbase4:43561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:00,618 DEBUG [RS:3;jenkins-hbase4:43561] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:00,618 INFO [RS:3;jenkins-hbase4:43561] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:15:00,618 INFO [RS:3;jenkins-hbase4:43561] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 23:15:00,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:00,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:00,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:00,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:00,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:00,637 DEBUG [hconnection-0x2cf74ee0-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:00,641 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54064, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:00,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:00,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:00,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:00,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:00,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:42846 deadline: 1689550500663, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
2023-07-16 23:15:00,665 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:00,667 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:00,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:00,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:00,670 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:00,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:00,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:00,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:00,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:00,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:00,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:00,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:00,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:00,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:00,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:00,722 INFO [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43561%2C1689549300217, suffix=, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,43561,1689549300217, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs, maxLogs=32 2023-07-16 23:15:00,724 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:00,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:00,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:00,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:00,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:00,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:00,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:00,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(238): Moving server region 898ed5e7258b3e0527188384fae4bfe2, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:00,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, REOPEN/MOVE 2023-07-16 23:15:00,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:00,751 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, REOPEN/MOVE 2023-07-16 23:15:00,752 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:15:00,759 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:15:00,759 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:15:00,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 23:15:00,761 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=898ed5e7258b3e0527188384fae4bfe2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:00,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-16 23:15:00,761 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549300761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549300761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549300761"}]},"ts":"1689549300761"} 2023-07-16 23:15:00,765 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:00,794 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 23:15:00,803 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38989,1689549296125, state=CLOSING 2023-07-16 23:15:00,803 INFO [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,43561,1689549300217/jenkins-hbase4.apache.org%2C43561%2C1689549300217.1689549300723 2023-07-16 23:15:00,805 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 23:15:00,805 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:00,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:00,812 DEBUG [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK], DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK]] 2023-07-16 23:15:00,967 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-16 23:15:00,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:00,968 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 898ed5e7258b3e0527188384fae4bfe2, disabling compactions & flushes 2023-07-16 23:15:00,970 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:15:00,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. after waiting 0 ms 2023-07-16 23:15:00,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:00,972 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-16 23:15:00,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 898ed5e7258b3e0527188384fae4bfe2 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-16 23:15:01,131 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/info/ac65615b493b44108d3d175f03030d0e 2023-07-16 23:15:01,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/.tmp/m/9e26e015c74246d9a1c8fb189e42a1b1 2023-07-16 23:15:01,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/.tmp/m/9e26e015c74246d9a1c8fb189e42a1b1 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m/9e26e015c74246d9a1c8fb189e42a1b1 2023-07-16 23:15:01,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m/9e26e015c74246d9a1c8fb189e42a1b1, entries=3, sequenceid=9, filesize=5.2 K 2023-07-16 23:15:01,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for 898ed5e7258b3e0527188384fae4bfe2 in 279ms, sequenceid=9, compaction 
requested=false 2023-07-16 23:15:01,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 23:15:01,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 23:15:01,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:01,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:01,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 898ed5e7258b3e0527188384fae4bfe2: 2023-07-16 23:15:01,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 898ed5e7258b3e0527188384fae4bfe2 move to jenkins-hbase4.apache.org,43561,1689549300217 record at close sequenceid=9 2023-07-16 23:15:01,289 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:01,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:01,291 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/table/663480be65c24603b77db17a1d90ae01 2023-07-16 23:15:01,301 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/info/ac65615b493b44108d3d175f03030d0e as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info/ac65615b493b44108d3d175f03030d0e 2023-07-16 23:15:01,310 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info/ac65615b493b44108d3d175f03030d0e, entries=21, sequenceid=15, filesize=7.1 K 2023-07-16 23:15:01,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/table/663480be65c24603b77db17a1d90ae01 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table/663480be65c24603b77db17a1d90ae01 2023-07-16 23:15:01,323 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table/663480be65c24603b77db17a1d90ae01, entries=4, sequenceid=15, filesize=4.8 K 2023-07-16 23:15:01,326 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 355ms, sequenceid=15, compaction requested=false 2023-07-16 23:15:01,326 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 23:15:01,347 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-16 23:15:01,348 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:01,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:01,349 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 23:15:01,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,43561,1689549300217 record at close sequenceid=15 2023-07-16 23:15:01,355 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-16 23:15:01,358 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-16 23:15:01,359 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-16 23:15:01,359 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38989,1689549296125 in 550 msec 2023-07-16 23:15:01,360 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:01,510 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:01,511 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43561,1689549300217, state=OPENING 2023-07-16 23:15:01,512 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 23:15:01,512 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:01,512 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:01,668 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:01,668 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:01,671 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35148, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:01,677 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 23:15:01,678 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:01,680 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43561%2C1689549300217.meta, suffix=.meta, logDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,43561,1689549300217, archiveDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs, maxLogs=32 2023-07-16 23:15:01,701 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK] 2023-07-16 23:15:01,705 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK] 2023-07-16 23:15:01,714 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK] 2023-07-16 23:15:01,719 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/WALs/jenkins-hbase4.apache.org,43561,1689549300217/jenkins-hbase4.apache.org%2C43561%2C1689549300217.meta.1689549301681.meta 2023-07-16 23:15:01,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK], DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK]] 2023-07-16 23:15:01,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:01,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:15:01,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 23:15:01,720 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 23:15:01,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 23:15:01,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:01,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 23:15:01,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 23:15:01,726 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 23:15:01,728 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info 2023-07-16 23:15:01,728 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info 2023-07-16 23:15:01,728 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:15:01,750 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info/ac65615b493b44108d3d175f03030d0e 2023-07-16 23:15:01,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:01,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:15:01,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:01,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:01,754 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:15:01,755 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:01,755 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:15:01,757 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table 2023-07-16 23:15:01,757 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table 2023-07-16 23:15:01,757 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:15:01,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-16 23:15:01,773 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table/663480be65c24603b77db17a1d90ae01 2023-07-16 23:15:01,774 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:01,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740 2023-07-16 23:15:01,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740 2023-07-16 23:15:01,794 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 23:15:01,799 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:15:01,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11905393280, jitterRate=0.10877615213394165}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:15:01,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:15:01,804 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1689549301668 2023-07-16 23:15:01,809 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 23:15:01,811 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 23:15:01,811 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43561,1689549300217, state=OPEN 2023-07-16 23:15:01,813 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 23:15:01,813 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:01,815 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=898ed5e7258b3e0527188384fae4bfe2, regionState=CLOSED 2023-07-16 23:15:01,816 
DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549301815"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549301815"}]},"ts":"1689549301815"} 2023-07-16 23:15:01,817 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38989] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:59138 deadline: 1689549361817, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43561 startCode=1689549300217. As of locationSeqNum=15. 2023-07-16 23:15:01,819 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-16 23:15:01,819 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43561,1689549300217 in 301 msec 2023-07-16 23:15:01,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 1.0690 sec 2023-07-16 23:15:01,919 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:01,921 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:01,928 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-16 23:15:01,929 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,38989,1689549296125 in 1.1590 sec 2023-07-16 23:15:01,930 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:02,081 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:02,081 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=898ed5e7258b3e0527188384fae4bfe2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:02,081 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549302081"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549302081"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549302081"}]},"ts":"1689549302081"} 2023-07-16 23:15:02,085 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:02,244 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:02,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 898ed5e7258b3e0527188384fae4bfe2, NAME => 'hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:02,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:15:02,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. service=MultiRowMutationService 2023-07-16 23:15:02,245 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 23:15:02,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:02,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,247 INFO [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,248 DEBUG [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m 2023-07-16 23:15:02,248 DEBUG [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m 2023-07-16 23:15:02,249 INFO [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 898ed5e7258b3e0527188384fae4bfe2 columnFamilyName m 2023-07-16 23:15:02,260 DEBUG [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] regionserver.HStore(539): loaded hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m/9e26e015c74246d9a1c8fb189e42a1b1 2023-07-16 23:15:02,260 INFO [StoreOpener-898ed5e7258b3e0527188384fae4bfe2-1] regionserver.HStore(310): Store=898ed5e7258b3e0527188384fae4bfe2/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:02,261 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:02,269 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 898ed5e7258b3e0527188384fae4bfe2; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@70920bd7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:02,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 898ed5e7258b3e0527188384fae4bfe2: 2023-07-16 23:15:02,271 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2., pid=17, masterSystemTime=1689549302238 2023-07-16 23:15:02,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:02,274 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:02,275 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=898ed5e7258b3e0527188384fae4bfe2, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:02,275 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549302274"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549302274"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549302274"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549302274"}]},"ts":"1689549302274"} 2023-07-16 23:15:02,282 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-16 23:15:02,282 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure 898ed5e7258b3e0527188384fae4bfe2, server=jenkins-hbase4.apache.org,43561,1689549300217 in 193 msec 2023-07-16 23:15:02,284 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=898ed5e7258b3e0527188384fae4bfe2, REOPEN/MOVE in 1.5390 sec 2023-07-16 23:15:02,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to default 2023-07-16 23:15:02,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:02,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:02,766 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38989] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:54064 deadline: 1689549362766, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43561 startCode=1689549300217. As of locationSeqNum=9. 2023-07-16 23:15:02,873 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38989] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:54064 deadline: 1689549362873, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43561 startCode=1689549300217. As of locationSeqNum=15. 2023-07-16 23:15:02,975 DEBUG [hconnection-0x2cf74ee0-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:02,980 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35162, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:03,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:03,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:03,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:03,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:03,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:03,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:03,027 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:03,029 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38989] ipc.CallRunner(144): callId: 45 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:59138 deadline: 1689549363029, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43561 startCode=1689549300217. As of locationSeqNum=9. 
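Note: the MoveServers, ListRSGroupInfos and GetRSGroupInfo requests above go through the RSGroupAdmin endpoint on the master. A rough client-side sketch, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (an internal class whose exact constructor and method signatures may differ between releases); the group name and host:port are placeholders echoing the log:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the target group and move one region server (host:port address) into it,
      // mirroring the "Move servers done: default => ..." entry above.
      rsGroupAdmin.addRSGroup("Group_testTableMoveTruncateAndDrop_1620563459");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41683)),
          "Group_testTableMoveTruncateAndDrop_1620563459");
      RSGroupInfo info =
          rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_1620563459");
      System.out.println(info.getServers());
    }
  }
}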
2023-07-16 23:15:03,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-16 23:15:03,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 23:15:03,137 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:03,139 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:03,140 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:03,140 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:03,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 23:15:03,149 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:03,156 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,156 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,157 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c empty. 2023-07-16 23:15:03,157 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,157 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,157 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 empty. 
2023-07-16 23:15:03,157 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,158 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,158 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 empty. 2023-07-16 23:15:03,158 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b empty. 2023-07-16 23:15:03,158 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,159 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,159 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,159 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 empty. 
2023-07-16 23:15:03,162 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,162 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 23:15:03,196 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:03,198 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f7bee6410187b0a2e8dceb2dba140a85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:03,198 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8dd09e672cc070e037d195f94a230f78, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:03,203 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 831c4dce87e9f77abca59e1627c2340c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:03,246 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,247 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 
f7bee6410187b0a2e8dceb2dba140a85, disabling compactions & flushes 2023-07-16 23:15:03,247 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:03,247 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:03,247 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. after waiting 0 ms 2023-07-16 23:15:03,247 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:03,247 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:03,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f7bee6410187b0a2e8dceb2dba140a85: 2023-07-16 23:15:03,248 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 1c57e7e850e711b509d82ee9ec3a570b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:03,250 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8dd09e672cc070e037d195f94a230f78, disabling compactions & flushes 2023-07-16 23:15:03,251 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:03,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:03,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 
after waiting 0 ms 2023-07-16 23:15:03,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:03,251 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:03,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8dd09e672cc070e037d195f94a230f78: 2023-07-16 23:15:03,252 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1ea05c2d9222c69e0dee406374515018, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:03,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 831c4dce87e9f77abca59e1627c2340c, disabling compactions & flushes 2023-07-16 23:15:03,253 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:03,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:03,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. after waiting 0 ms 2023-07-16 23:15:03,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:03,253 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 
2023-07-16 23:15:03,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 831c4dce87e9f77abca59e1627c2340c: 2023-07-16 23:15:03,269 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,269 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 1c57e7e850e711b509d82ee9ec3a570b, disabling compactions & flushes 2023-07-16 23:15:03,269 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:03,269 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:03,269 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. after waiting 0 ms 2023-07-16 23:15:03,269 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:03,270 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 1c57e7e850e711b509d82ee9ec3a570b: 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1ea05c2d9222c69e0dee406374515018, disabling compactions & flushes 2023-07-16 23:15:03,270 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 
after waiting 0 ms 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,270 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,270 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1ea05c2d9222c69e0dee406374515018: 2023-07-16 23:15:03,273 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:03,275 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303274"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549303274"}]},"ts":"1689549303274"} 2023-07-16 23:15:03,275 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303274"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549303274"}]},"ts":"1689549303274"} 2023-07-16 23:15:03,275 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549303274"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549303274"}]},"ts":"1689549303274"} 2023-07-16 23:15:03,275 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303274"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549303274"}]},"ts":"1689549303274"} 2023-07-16 23:15:03,275 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549303274"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549303274"}]},"ts":"1689549303274"} 2023-07-16 23:15:03,321 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
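Note: the five regions added to meta above come from a table created with four split keys (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz). A minimal sketch of the equivalent client call with the 2.x Admin API; the binary split keys are expressed with Bytes.toBytesBinary for readability, and this is illustrative rather than the test's own code:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateSplitTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Four split keys yield the five regions logged above:
      // ['', aaaaa), [aaaaa, i\xBF\x14i\xBE), [i\xBF\x14i\xBE, r\x1C\xC7r\x1B),
      // [r\x1C\xC7r\x1B, zzzzz) and [zzzzz, '').
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splitKeys);
    }
  }
}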
2023-07-16 23:15:03,322 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:03,323 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549303323"}]},"ts":"1689549303323"} 2023-07-16 23:15:03,325 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 23:15:03,329 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:03,329 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:03,329 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:03,329 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:03,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, ASSIGN}] 2023-07-16 23:15:03,333 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, ASSIGN 2023-07-16 23:15:03,333 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, ASSIGN 2023-07-16 23:15:03,334 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, ASSIGN 2023-07-16 23:15:03,334 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, ASSIGN 2023-07-16 23:15:03,335 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, ASSIGN 2023-07-16 23:15:03,336 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:03,336 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:03,336 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:03,336 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:03,341 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:03,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 23:15:03,487 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
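Note: with forceNewPlan=false and retain=false, the balancer entry above distributes the five new regions across the group's two servers (ports 41683 and 43561). For illustration only, a short sketch of how a client could list where the regions landed once the table is online, using the standard RegionLocator API:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionLocationsSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Print each region's start key and the server it was assigned to, matching the
      // OPENING -> OPEN transitions recorded in hbase:meta above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(Bytes.toStringBinary(loc.getRegion().getStartKey())
            + " -> " + loc.getServerName());
      }
    }
  }
}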
2023-07-16 23:15:03,490 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:03,490 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:03,490 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:03,490 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:03,490 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:03,490 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549303490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549303490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549303490"}]},"ts":"1689549303490"} 2023-07-16 23:15:03,491 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549303490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549303490"}]},"ts":"1689549303490"} 2023-07-16 23:15:03,490 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549303490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549303490"}]},"ts":"1689549303490"} 2023-07-16 23:15:03,491 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549303490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549303490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549303490"}]},"ts":"1689549303490"} 2023-07-16 23:15:03,491 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549303490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549303490"}]},"ts":"1689549303490"} 2023-07-16 23:15:03,493 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=19, state=RUNNABLE; OpenRegionProcedure 
831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:03,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=22, state=RUNNABLE; OpenRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:03,498 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=21, state=RUNNABLE; OpenRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:03,500 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=23, state=RUNNABLE; OpenRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:03,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=20, state=RUNNABLE; OpenRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:03,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 23:15:03,655 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1ea05c2d9222c69e0dee406374515018, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 23:15:03,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 
2023-07-16 23:15:03,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8dd09e672cc070e037d195f94a230f78, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 23:15:03,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,659 INFO [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,663 DEBUG [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/f 2023-07-16 23:15:03,663 DEBUG [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/f 2023-07-16 23:15:03,663 INFO [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1ea05c2d9222c69e0dee406374515018 columnFamilyName f 2023-07-16 23:15:03,664 INFO [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] regionserver.HStore(310): Store=1ea05c2d9222c69e0dee406374515018/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:03,667 INFO [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,670 DEBUG [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/f 2023-07-16 23:15:03,671 DEBUG [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/f 2023-07-16 23:15:03,671 INFO [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8dd09e672cc070e037d195f94a230f78 columnFamilyName f 2023-07-16 23:15:03,672 INFO [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] regionserver.HStore(310): Store=8dd09e672cc070e037d195f94a230f78/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:03,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:03,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:03,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:03,684 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1ea05c2d9222c69e0dee406374515018; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11358737440, jitterRate=0.05786485970020294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:03,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1ea05c2d9222c69e0dee406374515018: 2023-07-16 23:15:03,686 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018., pid=27, masterSystemTime=1689549303648 2023-07-16 23:15:03,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:03,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:03,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 
2023-07-16 23:15:03,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f7bee6410187b0a2e8dceb2dba140a85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 23:15:03,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8dd09e672cc070e037d195f94a230f78; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12077465120, jitterRate=0.12480159103870392}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:03,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8dd09e672cc070e037d195f94a230f78: 2023-07-16 23:15:03,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,692 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:03,693 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549303692"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549303692"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549303692"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549303692"}]},"ts":"1689549303692"} 2023-07-16 23:15:03,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78., pid=28, masterSystemTime=1689549303652 2023-07-16 23:15:03,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:03,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:03,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 
2023-07-16 23:15:03,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c57e7e850e711b509d82ee9ec3a570b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 23:15:03,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,699 INFO [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,700 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:03,701 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549303700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549303700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549303700"}]},"ts":"1689549303700"} 2023-07-16 23:15:03,701 INFO [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,704 DEBUG [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/f 2023-07-16 23:15:03,705 DEBUG [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/f 2023-07-16 23:15:03,706 DEBUG [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/f 2023-07-16 23:15:03,706 DEBUG [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/f 2023-07-16 23:15:03,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=23 2023-07-16 23:15:03,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=23, state=SUCCESS; OpenRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,41683,1689549296507 in 196 msec 2023-07-16 23:15:03,707 INFO [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f7bee6410187b0a2e8dceb2dba140a85 columnFamilyName f 2023-07-16 23:15:03,707 INFO [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c57e7e850e711b509d82ee9ec3a570b columnFamilyName f 2023-07-16 23:15:03,708 INFO [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] regionserver.HStore(310): Store=f7bee6410187b0a2e8dceb2dba140a85/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:03,709 INFO [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] regionserver.HStore(310): Store=1c57e7e850e711b509d82ee9ec3a570b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:03,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=20 2023-07-16 23:15:03,713 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, ASSIGN in 377 msec 2023-07-16 23:15:03,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=20, state=SUCCESS; OpenRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,43561,1689549300217 in 203 msec 2023-07-16 23:15:03,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, ASSIGN in 383 msec 2023-07-16 23:15:03,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:03,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:03,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:03,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f7bee6410187b0a2e8dceb2dba140a85; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11850994400, jitterRate=0.1037098616361618}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:03,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f7bee6410187b0a2e8dceb2dba140a85: 2023-07-16 23:15:03,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85., pid=26, masterSystemTime=1689549303648 2023-07-16 23:15:03,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:03,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 1c57e7e850e711b509d82ee9ec3a570b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10112216480, jitterRate=-0.05822645127773285}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:03,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1c57e7e850e711b509d82ee9ec3a570b: 2023-07-16 23:15:03,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:03,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:03,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:03,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 831c4dce87e9f77abca59e1627c2340c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 23:15:03,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b., pid=25, masterSystemTime=1689549303652 2023-07-16 23:15:03,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:03,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,736 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:03,737 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303736"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549303736"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549303736"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549303736"}]},"ts":"1689549303736"} 2023-07-16 23:15:03,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task 
for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:03,737 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:03,743 INFO [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,744 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:03,745 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549303744"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549303744"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549303744"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549303744"}]},"ts":"1689549303744"} 2023-07-16 23:15:03,747 DEBUG [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/f 2023-07-16 23:15:03,747 DEBUG [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/f 2023-07-16 23:15:03,748 INFO [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 831c4dce87e9f77abca59e1627c2340c columnFamilyName f 2023-07-16 23:15:03,749 INFO [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] regionserver.HStore(310): Store=831c4dce87e9f77abca59e1627c2340c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:03,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=21 2023-07-16 23:15:03,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; OpenRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,41683,1689549296507 in 247 msec 2023-07-16 23:15:03,754 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=22 2023-07-16 23:15:03,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=22, state=SUCCESS; OpenRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,43561,1689549300217 in 250 msec 2023-07-16 23:15:03,757 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, ASSIGN in 423 msec 2023-07-16 23:15:03,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, ASSIGN in 426 msec 2023-07-16 23:15:03,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:03,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:03,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 831c4dce87e9f77abca59e1627c2340c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9422839840, jitterRate=-0.12242965400218964}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:03,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 831c4dce87e9f77abca59e1627c2340c: 2023-07-16 23:15:03,771 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c., pid=24, masterSystemTime=1689549303648 2023-07-16 23:15:03,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:03,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 
2023-07-16 23:15:03,775 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:03,775 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549303775"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549303775"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549303775"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549303775"}]},"ts":"1689549303775"} 2023-07-16 23:15:03,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=19 2023-07-16 23:15:03,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=19, state=SUCCESS; OpenRegionProcedure 831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,41683,1689549296507 in 287 msec 2023-07-16 23:15:03,786 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-16 23:15:03,786 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, ASSIGN in 453 msec 2023-07-16 23:15:03,787 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:03,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549303788"}]},"ts":"1689549303788"} 2023-07-16 23:15:03,790 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 23:15:03,795 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:03,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 772 msec 2023-07-16 23:15:04,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 23:15:04,152 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-16 23:15:04,152 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-16 23:15:04,153 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:04,154 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38989] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:59156 deadline: 1689549364154, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43561 startCode=1689549300217. As of locationSeqNum=15. 2023-07-16 23:15:04,257 DEBUG [hconnection-0x29a77039-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:04,261 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35174, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:04,270 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-16 23:15:04,271 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:04,271 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-16 23:15:04,271 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:04,276 DEBUG [Listener at localhost/40131] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:04,279 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34698, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:04,281 DEBUG [Listener at localhost/40131] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:04,284 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54072, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:04,284 DEBUG [Listener at localhost/40131] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:04,286 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44152, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:04,287 DEBUG [Listener at localhost/40131] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:04,288 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35180, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:04,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:04,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:04,302 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:04,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:04,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:04,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region 831c4dce87e9f77abca59e1627c2340c to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:04,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:04,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:04,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:04,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:04,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, REOPEN/MOVE 2023-07-16 23:15:04,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region 8dd09e672cc070e037d195f94a230f78 to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,329 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, REOPEN/MOVE 2023-07-16 23:15:04,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:04,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:04,330 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:04,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:04,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:04,332 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:04,332 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304331"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304331"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304331"}]},"ts":"1689549304331"} 2023-07-16 23:15:04,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, REOPEN/MOVE 2023-07-16 23:15:04,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region f7bee6410187b0a2e8dceb2dba140a85 to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,335 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, REOPEN/MOVE 2023-07-16 23:15:04,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:04,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:04,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:04,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:04,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:04,336 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:04,337 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304336"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304336"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304336"}]},"ts":"1689549304336"} 2023-07-16 23:15:04,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, REOPEN/MOVE 2023-07-16 23:15:04,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region 1c57e7e850e711b509d82ee9ec3a570b to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:04,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:04,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:04,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:04,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:04,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, REOPEN/MOVE 2023-07-16 23:15:04,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region 1ea05c2d9222c69e0dee406374515018 to RSGroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:04,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:04,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:04,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:04,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:04,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:04,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, REOPEN/MOVE 2023-07-16 23:15:04,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1620563459, current retry=0 2023-07-16 23:15:04,343 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, REOPEN/MOVE 2023-07-16 23:15:04,343 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, REOPEN/MOVE 2023-07-16 23:15:04,343 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=29, state=RUNNABLE; CloseRegionProcedure 831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:04,344 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, REOPEN/MOVE 2023-07-16 23:15:04,345 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=30, state=RUNNABLE; CloseRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:04,345 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:04,345 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304345"}]},"ts":"1689549304345"} 2023-07-16 23:15:04,346 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:04,346 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304346"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304346"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304346"}]},"ts":"1689549304346"} 2023-07-16 23:15:04,347 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:04,347 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304347"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304347"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304347"}]},"ts":"1689549304347"} 2023-07-16 23:15:04,348 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:04,349 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=32, state=RUNNABLE; CloseRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:04,350 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=38, ppid=33, state=RUNNABLE; CloseRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:04,413 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 23:15:04,486 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 23:15:04,487 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-16 23:15:04,489 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 23:15:04,489 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-16 23:15:04,490 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:04,490 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-16 23:15:04,490 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 23:15:04,490 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-16 23:15:04,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1ea05c2d9222c69e0dee406374515018, disabling compactions & flushes 2023-07-16 23:15:04,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:04,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:04,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. after waiting 0 ms 2023-07-16 23:15:04,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 
2023-07-16 23:15:04,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c57e7e850e711b509d82ee9ec3a570b, disabling compactions & flushes 2023-07-16 23:15:04,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:04,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:04,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. after waiting 0 ms 2023-07-16 23:15:04,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:04,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:04,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:04,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c57e7e850e711b509d82ee9ec3a570b: 2023-07-16 23:15:04,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1c57e7e850e711b509d82ee9ec3a570b move to jenkins-hbase4.apache.org,33913,1689549296335 record at close sequenceid=2 2023-07-16 23:15:04,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8dd09e672cc070e037d195f94a230f78, disabling compactions & flushes 2023-07-16 23:15:04,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:04,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:04,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. after waiting 0 ms 2023-07-16 23:15:04,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 
2023-07-16 23:15:04,535 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=CLOSED 2023-07-16 23:15:04,535 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549304535"}]},"ts":"1689549304535"} 2023-07-16 23:15:04,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=32 2023-07-16 23:15:04,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=32, state=SUCCESS; CloseRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,43561,1689549300217 in 191 msec 2023-07-16 23:15:04,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:04,552 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:04,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:04,555 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1ea05c2d9222c69e0dee406374515018: 2023-07-16 23:15:04,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1ea05c2d9222c69e0dee406374515018 move to jenkins-hbase4.apache.org,33913,1689549296335 record at close sequenceid=2 2023-07-16 23:15:04,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f7bee6410187b0a2e8dceb2dba140a85, disabling compactions & flushes 2023-07-16 23:15:04,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:04,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:04,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 
after waiting 0 ms 2023-07-16 23:15:04,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:04,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:04,568 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=CLOSED 2023-07-16 23:15:04,568 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304568"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549304568"}]},"ts":"1689549304568"} 2023-07-16 23:15:04,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:04,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8dd09e672cc070e037d195f94a230f78: 2023-07-16 23:15:04,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8dd09e672cc070e037d195f94a230f78 move to jenkins-hbase4.apache.org,38989,1689549296125 record at close sequenceid=2 2023-07-16 23:15:04,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,578 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=CLOSED 2023-07-16 23:15:04,578 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304578"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549304578"}]},"ts":"1689549304578"} 2023-07-16 23:15:04,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:04,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 
2023-07-16 23:15:04,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f7bee6410187b0a2e8dceb2dba140a85: 2023-07-16 23:15:04,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f7bee6410187b0a2e8dceb2dba140a85 move to jenkins-hbase4.apache.org,38989,1689549296125 record at close sequenceid=2 2023-07-16 23:15:04,585 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=33 2023-07-16 23:15:04,585 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=33, state=SUCCESS; CloseRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,41683,1689549296507 in 227 msec 2023-07-16 23:15:04,586 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:04,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 831c4dce87e9f77abca59e1627c2340c, disabling compactions & flushes 2023-07-16 23:15:04,589 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:04,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:04,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. after waiting 0 ms 2023-07-16 23:15:04,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 
2023-07-16 23:15:04,591 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=30 2023-07-16 23:15:04,591 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=30, state=SUCCESS; CloseRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,43561,1689549300217 in 239 msec 2023-07-16 23:15:04,591 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=CLOSED 2023-07-16 23:15:04,591 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304591"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549304591"}]},"ts":"1689549304591"} 2023-07-16 23:15:04,599 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:15:04,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:04,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-16 23:15:04,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 
2023-07-16 23:15:04,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,41683,1689549296507 in 252 msec 2023-07-16 23:15:04,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 831c4dce87e9f77abca59e1627c2340c: 2023-07-16 23:15:04,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 831c4dce87e9f77abca59e1627c2340c move to jenkins-hbase4.apache.org,38989,1689549296125 record at close sequenceid=2 2023-07-16 23:15:04,610 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:15:04,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,619 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=CLOSED 2023-07-16 23:15:04,620 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304619"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549304619"}]},"ts":"1689549304619"} 2023-07-16 23:15:04,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=29 2023-07-16 23:15:04,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=29, state=SUCCESS; CloseRegionProcedure 831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,41683,1689549296507 in 280 msec 2023-07-16 23:15:04,628 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:15:04,704 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
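The entries above show the five CloseRegionProcedures finishing and the balancer computing new assignments for the table's five regions as part of the move to the target RSGroup (REOPEN/MOVE). A minimal sketch of how a test might wait for that reassignment to settle, assuming the usual HBaseTestingUtility helper (TEST_UTIL here) and a JUnit assertion; none of these names come from the log itself:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import static org.junit.Assert.assertTrue;

// Illustrative only; TEST_UTIL and the assertion are assumptions, not taken from this log.
static void waitForGroupMoveToSettle(HBaseTestingUtility TEST_UTIL) throws Exception {
  TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
  // Blocks until every region of the table is open on some region server again.
  TEST_UTIL.waitUntilAllRegionsAssigned(table);
  try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
    assertTrue(admin.isTableAvailable(table));
  }
}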
2023-07-16 23:15:04,704 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:04,704 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:04,704 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:04,704 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304704"}]},"ts":"1689549304704"} 2023-07-16 23:15:04,704 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:04,704 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:04,705 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304704"}]},"ts":"1689549304704"} 2023-07-16 23:15:04,705 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304704"}]},"ts":"1689549304704"} 2023-07-16 23:15:04,705 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304704"}]},"ts":"1689549304704"} 2023-07-16 23:15:04,705 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549304704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549304704"}]},"ts":"1689549304704"} 2023-07-16 23:15:04,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=29, state=RUNNABLE; OpenRegionProcedure 
831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:04,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=31, state=RUNNABLE; OpenRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:04,712 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=33, state=RUNNABLE; OpenRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:04,715 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=30, state=RUNNABLE; OpenRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:04,716 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=32, state=RUNNABLE; OpenRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:04,869 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:04,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f7bee6410187b0a2e8dceb2dba140a85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 23:15:04,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:04,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,870 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:04,870 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:04,872 INFO [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,882 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:04,882 DEBUG [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/f 2023-07-16 23:15:04,882 DEBUG [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/f 2023-07-16 23:15:04,883 INFO [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f7bee6410187b0a2e8dceb2dba140a85 columnFamilyName f 2023-07-16 23:15:04,883 INFO [StoreOpener-f7bee6410187b0a2e8dceb2dba140a85-1] regionserver.HStore(310): Store=f7bee6410187b0a2e8dceb2dba140a85/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:04,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 
2023-07-16 23:15:04,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c57e7e850e711b509d82ee9ec3a570b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 23:15:04,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:04,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:04,892 INFO [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,892 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f7bee6410187b0a2e8dceb2dba140a85; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11704746560, jitterRate=0.09008947014808655}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:04,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f7bee6410187b0a2e8dceb2dba140a85: 2023-07-16 23:15:04,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85., pid=40, masterSystemTime=1689549304862 2023-07-16 23:15:04,896 DEBUG [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/f 2023-07-16 23:15:04,897 DEBUG [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/f 2023-07-16 23:15:04,897 INFO [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c57e7e850e711b509d82ee9ec3a570b columnFamilyName f 2023-07-16 23:15:04,899 INFO [StoreOpener-1c57e7e850e711b509d82ee9ec3a570b-1] regionserver.HStore(310): Store=1c57e7e850e711b509d82ee9ec3a570b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:04,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:04,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:04,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:04,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 831c4dce87e9f77abca59e1627c2340c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 23:15:04,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:04,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,904 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:04,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,905 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304904"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549304904"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549304904"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549304904"}]},"ts":"1689549304904"} 2023-07-16 23:15:04,907 INFO [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:04,909 DEBUG [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/f 2023-07-16 23:15:04,909 DEBUG [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/f 2023-07-16 23:15:04,909 INFO [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 831c4dce87e9f77abca59e1627c2340c columnFamilyName f 2023-07-16 23:15:04,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=31 2023-07-16 23:15:04,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=31, state=SUCCESS; OpenRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,38989,1689549296125 in 197 msec 2023-07-16 23:15:04,911 INFO [StoreOpener-831c4dce87e9f77abca59e1627c2340c-1] regionserver.HStore(310): Store=831c4dce87e9f77abca59e1627c2340c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:04,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1c57e7e850e711b509d82ee9ec3a570b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9652099520, jitterRate=-0.10107818245887756}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:04,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region 
open journal for 1c57e7e850e711b509d82ee9ec3a570b: 2023-07-16 23:15:04,917 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b., pid=43, masterSystemTime=1689549304870 2023-07-16 23:15:04,918 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, REOPEN/MOVE in 574 msec 2023-07-16 23:15:04,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:04,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:04,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:04,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1ea05c2d9222c69e0dee406374515018, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 23:15:04,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,924 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:04,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:04,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,924 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304924"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549304924"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549304924"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549304924"}]},"ts":"1689549304924"} 2023-07-16 23:15:04,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,931 INFO [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,936 DEBUG [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/f 2023-07-16 23:15:04,936 DEBUG [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/f 2023-07-16 23:15:04,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:04,937 INFO [StoreOpener-1ea05c2d9222c69e0dee406374515018-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1ea05c2d9222c69e0dee406374515018 columnFamilyName f 2023-07-16 23:15:04,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=32 2023-07-16 23:15:04,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=32, state=SUCCESS; OpenRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,33913,1689549296335 in 217 msec 2023-07-16 23:15:04,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 831c4dce87e9f77abca59e1627c2340c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11926320640, jitterRate=0.11072516441345215}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:04,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 831c4dce87e9f77abca59e1627c2340c: 2023-07-16 23:15:04,940 INFO 
[StoreOpener-1ea05c2d9222c69e0dee406374515018-1] regionserver.HStore(310): Store=1ea05c2d9222c69e0dee406374515018/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:04,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c., pid=39, masterSystemTime=1689549304862 2023-07-16 23:15:04,941 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, REOPEN/MOVE in 600 msec 2023-07-16 23:15:04,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:04,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:04,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 
2023-07-16 23:15:04,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8dd09e672cc070e037d195f94a230f78, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 23:15:04,944 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:04,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,944 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304944"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549304944"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549304944"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549304944"}]},"ts":"1689549304944"} 2023-07-16 23:15:04,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:04,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,946 INFO [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,947 DEBUG [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/f 2023-07-16 23:15:04,947 DEBUG [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/f 2023-07-16 23:15:04,948 INFO [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8dd09e672cc070e037d195f94a230f78 columnFamilyName f 2023-07-16 23:15:04,948 INFO [StoreOpener-8dd09e672cc070e037d195f94a230f78-1] regionserver.HStore(310): Store=8dd09e672cc070e037d195f94a230f78/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:04,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:04,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=29 2023-07-16 23:15:04,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=29, state=SUCCESS; OpenRegionProcedure 831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,38989,1689549296125 in 238 msec 2023-07-16 23:15:04,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, REOPEN/MOVE in 622 msec 2023-07-16 23:15:04,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1ea05c2d9222c69e0dee406374515018; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10572298080, jitterRate=-0.015378013253211975}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:04,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1ea05c2d9222c69e0dee406374515018: 2023-07-16 23:15:04,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018., pid=41, masterSystemTime=1689549304870 2023-07-16 23:15:04,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:04,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 
2023-07-16 23:15:04,956 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:04,956 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549304956"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549304956"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549304956"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549304956"}]},"ts":"1689549304956"} 2023-07-16 23:15:04,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:04,958 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8dd09e672cc070e037d195f94a230f78; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10630479520, jitterRate=-0.009959444403648376}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:04,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8dd09e672cc070e037d195f94a230f78: 2023-07-16 23:15:04,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78., pid=42, masterSystemTime=1689549304862 2023-07-16 23:15:04,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=33 2023-07-16 23:15:04,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=33, state=SUCCESS; OpenRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,33913,1689549296335 in 246 msec 2023-07-16 23:15:04,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:04,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 
2023-07-16 23:15:04,963 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:04,963 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549304962"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549304962"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549304962"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549304962"}]},"ts":"1689549304962"} 2023-07-16 23:15:04,964 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, REOPEN/MOVE in 621 msec 2023-07-16 23:15:04,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=30 2023-07-16 23:15:04,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=30, state=SUCCESS; OpenRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,38989,1689549296125 in 250 msec 2023-07-16 23:15:04,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, REOPEN/MOVE in 637 msec 2023-07-16 23:15:05,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-16 23:15:05,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1620563459. 
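The RSGroupAdminServer entry above reports that every region of Group_testTableMoveTruncateAndDrop has been moved to the target group Group_testTableMoveTruncateAndDrop_1620563459, and the MoveTables / ListRSGroupInfos / GetRSGroupInfoOfTable requests that follow are the client side of that exchange. A rough sketch of the equivalent calls through the hbase-rsgroup client API, assuming an already-populated Configuration named conf; only the table and group names are taken from the log:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Sketch of the MoveTables / GetRSGroupInfoOfTable requests seen in the log; not the test's actual code.
static void moveTableToGroup(Configuration conf) throws IOException {
  try (Connection conn = ConnectionFactory.createConnection(conf)) {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    rsGroupAdmin.moveTables(Collections.singleton(table),
        "Group_testTableMoveTruncateAndDrop_1620563459");
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
    System.out.println("Table is now in RSGroup: " + info.getName());
  }
}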
2023-07-16 23:15:05,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:05,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:05,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:05,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:05,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:05,351 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:05,357 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:05,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:05,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:05,376 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549305376"}]},"ts":"1689549305376"} 2023-07-16 23:15:05,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-16 23:15:05,378 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 23:15:05,380 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 23:15:05,384 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, UNASSIGN}] 2023-07-16 23:15:05,386 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, UNASSIGN 2023-07-16 23:15:05,387 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, UNASSIGN 2023-07-16 23:15:05,387 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, UNASSIGN 2023-07-16 23:15:05,387 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, UNASSIGN 2023-07-16 23:15:05,387 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, UNASSIGN 2023-07-16 23:15:05,388 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:05,388 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:05,388 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305388"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549305388"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549305388"}]},"ts":"1689549305388"} 2023-07-16 23:15:05,388 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:05,388 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:05,388 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305388"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549305388"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549305388"}]},"ts":"1689549305388"} 2023-07-16 23:15:05,388 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549305388"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549305388"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549305388"}]},"ts":"1689549305388"} 2023-07-16 23:15:05,388 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:05,389 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549305388"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549305388"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549305388"}]},"ts":"1689549305388"} 2023-07-16 23:15:05,388 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305388"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549305388"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549305388"}]},"ts":"1689549305388"} 2023-07-16 23:15:05,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=46, state=RUNNABLE; CloseRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:05,391 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=47, state=RUNNABLE; CloseRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:05,393 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=49, state=RUNNABLE; CloseRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:05,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=45, state=RUNNABLE; CloseRegionProcedure 831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:05,395 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=48, state=RUNNABLE; CloseRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:05,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-16 23:15:05,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:05,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8dd09e672cc070e037d195f94a230f78, disabling compactions & flushes 2023-07-16 23:15:05,545 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 
2023-07-16 23:15:05,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:05,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. after waiting 0 ms 2023-07-16 23:15:05,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:05,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:05,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1ea05c2d9222c69e0dee406374515018, disabling compactions & flushes 2023-07-16 23:15:05,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:05,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:05,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. after waiting 0 ms 2023-07-16 23:15:05,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 2023-07-16 23:15:05,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:05,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:05,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78. 2023-07-16 23:15:05,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8dd09e672cc070e037d195f94a230f78: 2023-07-16 23:15:05,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018. 
2023-07-16 23:15:05,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1ea05c2d9222c69e0dee406374515018: 2023-07-16 23:15:05,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:05,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:05,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 831c4dce87e9f77abca59e1627c2340c, disabling compactions & flushes 2023-07-16 23:15:05,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:05,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:05,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. after waiting 0 ms 2023-07-16 23:15:05,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:05,558 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=8dd09e672cc070e037d195f94a230f78, regionState=CLOSED 2023-07-16 23:15:05,558 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305558"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305558"}]},"ts":"1689549305558"} 2023-07-16 23:15:05,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:05,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:05,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c57e7e850e711b509d82ee9ec3a570b, disabling compactions & flushes 2023-07-16 23:15:05,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:05,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:05,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. after waiting 0 ms 2023-07-16 23:15:05,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 
2023-07-16 23:15:05,561 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=1ea05c2d9222c69e0dee406374515018, regionState=CLOSED 2023-07-16 23:15:05,561 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549305560"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305560"}]},"ts":"1689549305560"} 2023-07-16 23:15:05,566 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=46 2023-07-16 23:15:05,566 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=46, state=SUCCESS; CloseRegionProcedure 8dd09e672cc070e037d195f94a230f78, server=jenkins-hbase4.apache.org,38989,1689549296125 in 172 msec 2023-07-16 23:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:05,567 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=49 2023-07-16 23:15:05,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c. 2023-07-16 23:15:05,567 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; CloseRegionProcedure 1ea05c2d9222c69e0dee406374515018, server=jenkins-hbase4.apache.org,33913,1689549296335 in 170 msec 2023-07-16 23:15:05,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 831c4dce87e9f77abca59e1627c2340c: 2023-07-16 23:15:05,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8dd09e672cc070e037d195f94a230f78, UNASSIGN in 184 msec 2023-07-16 23:15:05,571 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ea05c2d9222c69e0dee406374515018, UNASSIGN in 185 msec 2023-07-16 23:15:05,572 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=831c4dce87e9f77abca59e1627c2340c, regionState=CLOSED 2023-07-16 23:15:05,572 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549305572"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305572"}]},"ts":"1689549305572"} 2023-07-16 23:15:05,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=45 2023-07-16 23:15:05,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=45, state=SUCCESS; CloseRegionProcedure 831c4dce87e9f77abca59e1627c2340c, server=jenkins-hbase4.apache.org,38989,1689549296125 in 180 msec 2023-07-16 23:15:05,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 831c4dce87e9f77abca59e1627c2340c 
2023-07-16 23:15:05,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:05,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f7bee6410187b0a2e8dceb2dba140a85, disabling compactions & flushes 2023-07-16 23:15:05,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:05,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:05,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. after waiting 0 ms 2023-07-16 23:15:05,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 2023-07-16 23:15:05,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=831c4dce87e9f77abca59e1627c2340c, UNASSIGN in 194 msec 2023-07-16 23:15:05,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:05,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b. 2023-07-16 23:15:05,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c57e7e850e711b509d82ee9ec3a570b: 2023-07-16 23:15:05,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:05,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:05,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85. 
2023-07-16 23:15:05,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f7bee6410187b0a2e8dceb2dba140a85: 2023-07-16 23:15:05,596 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=1c57e7e850e711b509d82ee9ec3a570b, regionState=CLOSED 2023-07-16 23:15:05,597 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305596"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305596"}]},"ts":"1689549305596"} 2023-07-16 23:15:05,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:05,600 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=f7bee6410187b0a2e8dceb2dba140a85, regionState=CLOSED 2023-07-16 23:15:05,600 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305600"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305600"}]},"ts":"1689549305600"} 2023-07-16 23:15:05,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=48 2023-07-16 23:15:05,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=48, state=SUCCESS; CloseRegionProcedure 1c57e7e850e711b509d82ee9ec3a570b, server=jenkins-hbase4.apache.org,33913,1689549296335 in 204 msec 2023-07-16 23:15:05,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1c57e7e850e711b509d82ee9ec3a570b, UNASSIGN in 222 msec 2023-07-16 23:15:05,608 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-16 23:15:05,608 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; CloseRegionProcedure f7bee6410187b0a2e8dceb2dba140a85, server=jenkins-hbase4.apache.org,38989,1689549296125 in 213 msec 2023-07-16 23:15:05,611 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=44 2023-07-16 23:15:05,611 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f7bee6410187b0a2e8dceb2dba140a85, UNASSIGN in 226 msec 2023-07-16 23:15:05,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549305615"}]},"ts":"1689549305615"} 2023-07-16 23:15:05,618 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 23:15:05,619 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 23:15:05,623 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 256 msec 2023-07-16 
23:15:05,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-16 23:15:05,682 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-16 23:15:05,683 INFO [Listener at localhost/40131] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:05,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:05,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-16 23:15:05,701 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-16 23:15:05,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 23:15:05,721 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:05,721 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:05,721 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:05,721 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:05,721 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:05,727 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/recovered.edits] 2023-07-16 23:15:05,727 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/recovered.edits] 2023-07-16 23:15:05,727 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): 
Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/recovered.edits] 2023-07-16 23:15:05,727 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/recovered.edits] 2023-07-16 23:15:05,727 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/recovered.edits] 2023-07-16 23:15:05,747 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018/recovered.edits/7.seqid 2023-07-16 23:15:05,748 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c/recovered.edits/7.seqid 2023-07-16 23:15:05,748 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78/recovered.edits/7.seqid 2023-07-16 23:15:05,752 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b/recovered.edits/7.seqid 2023-07-16 23:15:05,752 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ea05c2d9222c69e0dee406374515018 2023-07-16 23:15:05,752 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8dd09e672cc070e037d195f94a230f78 2023-07-16 23:15:05,753 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/831c4dce87e9f77abca59e1627c2340c 2023-07-16 23:15:05,753 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1c57e7e850e711b509d82ee9ec3a570b 2023-07-16 23:15:05,754 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85/recovered.edits/7.seqid 2023-07-16 23:15:05,755 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f7bee6410187b0a2e8dceb2dba140a85 2023-07-16 23:15:05,755 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 23:15:05,789 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 23:15:05,794 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 23:15:05,795 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
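[Editor's note] For context on the procedures recorded above (DisableTableProcedure pid=44 followed by TruncateTableProcedure pid=55 with preserveSplits=true), the following is a minimal client-side sketch of the Admin API calls that drive this sequence. The test's own source is not part of this log; class and variable names here are illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Truncate requires a disabled table; this corresponds to the
          // DisableTableProcedure (pid=44) logged above.
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
          // TruncateTableProcedure (pid=55): removes all data and, because
          // preserveSplits=true, recreates the table with the same split points.
          admin.truncateTable(table, true);
        }
      }
    }

The log then continues with the server-side cleanup and re-creation that this single truncateTable call triggers.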
2023-07-16 23:15:05,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549305795"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:05,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549305795"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:05,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549305795"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:05,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549305795"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:05,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549305795"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:05,799 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 23:15:05,799 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 831c4dce87e9f77abca59e1627c2340c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549303020.831c4dce87e9f77abca59e1627c2340c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8dd09e672cc070e037d195f94a230f78, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549303020.8dd09e672cc070e037d195f94a230f78.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f7bee6410187b0a2e8dceb2dba140a85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549303020.f7bee6410187b0a2e8dceb2dba140a85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 1c57e7e850e711b509d82ee9ec3a570b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549303020.1c57e7e850e711b509d82ee9ec3a570b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 1ea05c2d9222c69e0dee406374515018, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549303020.1ea05c2d9222c69e0dee406374515018.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 23:15:05,799 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-16 23:15:05,799 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549305799"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:05,802 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 23:15:05,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 23:15:05,814 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:05,814 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:05,814 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:05,814 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:05,814 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:05,815 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe empty. 2023-07-16 23:15:05,816 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 empty. 2023-07-16 23:15:05,816 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e empty. 2023-07-16 23:15:05,816 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c empty. 2023-07-16 23:15:05,816 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef empty. 
2023-07-16 23:15:05,817 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:05,817 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:05,817 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:05,818 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:05,818 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:05,818 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 23:15:05,852 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:05,854 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 215bfba233a6e0e261ee96a214bb7976, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:05,854 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6531f96d53025925be9f24cc17c810ef, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:05,855 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 80ba2f12d1a6f6d9c893686c46e53bbe, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:05,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:05,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 6531f96d53025925be9f24cc17c810ef, disabling compactions & flushes 2023-07-16 23:15:05,917 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:05,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:05,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. after waiting 0 ms 2023-07-16 23:15:05,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:05,917 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 
2023-07-16 23:15:05,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 6531f96d53025925be9f24cc17c810ef: 2023-07-16 23:15:05,918 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 66783cc6591fbacf71c14af590e3317e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:05,921 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:05,921 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 215bfba233a6e0e261ee96a214bb7976, disabling compactions & flushes 2023-07-16 23:15:05,921 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 80ba2f12d1a6f6d9c893686c46e53bbe, disabling compactions & flushes 2023-07-16 23:15:05,922 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. after waiting 0 ms 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:05,921 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 
2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. after waiting 0 ms 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:05,922 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:05,922 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 215bfba233a6e0e261ee96a214bb7976: 2023-07-16 23:15:05,922 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:05,924 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 80ba2f12d1a6f6d9c893686c46e53bbe: 2023-07-16 23:15:05,924 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 35dd369d62dceb76d38ad3136f60206c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:05,943 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:05,943 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 66783cc6591fbacf71c14af590e3317e, disabling compactions & flushes 2023-07-16 23:15:05,943 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:05,943 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:05,943 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 
after waiting 0 ms 2023-07-16 23:15:05,943 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:05,944 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:05,944 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 66783cc6591fbacf71c14af590e3317e: 2023-07-16 23:15:05,954 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:05,954 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 35dd369d62dceb76d38ad3136f60206c, disabling compactions & flushes 2023-07-16 23:15:05,955 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:05,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:05,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. after waiting 0 ms 2023-07-16 23:15:05,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:05,955 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 
2023-07-16 23:15:05,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 35dd369d62dceb76d38ad3136f60206c: 2023-07-16 23:15:05,960 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305959"}]},"ts":"1689549305959"} 2023-07-16 23:15:05,960 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549305959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305959"}]},"ts":"1689549305959"} 2023-07-16 23:15:05,960 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305959"}]},"ts":"1689549305959"} 2023-07-16 23:15:05,960 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549305959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305959"}]},"ts":"1689549305959"} 2023-07-16 23:15:05,960 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549305959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549305959"}]},"ts":"1689549305959"} 2023-07-16 23:15:05,964 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
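[Editor's note] The five regions re-added to hbase:meta above reuse the original split points ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). As a hedged illustration of how such a pre-split table is typically created in the first place (not taken from this test's source), a sketch using the HBase 2.x TableDescriptorBuilder API:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      // Creates the table with the single family 'f' and the four split keys
      // seen in the log, yielding five regions. 'admin' is assumed to be an
      // open Admin handle.
      static void create(Admin admin) throws java.io.IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),  // binary split keys as printed in the log
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        admin.createTable(desc, splitKeys);
      }
    }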
2023-07-16 23:15:05,965 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549305965"}]},"ts":"1689549305965"} 2023-07-16 23:15:05,967 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 23:15:05,973 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:05,973 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:05,973 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:05,973 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:05,973 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, ASSIGN}] 2023-07-16 23:15:05,976 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, ASSIGN 2023-07-16 23:15:05,976 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, ASSIGN 2023-07-16 23:15:05,976 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, ASSIGN 2023-07-16 23:15:05,977 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, ASSIGN 2023-07-16 23:15:05,977 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, ASSIGN 2023-07-16 23:15:05,977 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:05,978 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:05,978 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:15:05,978 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:15:05,979 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:06,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 23:15:06,128 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
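[Editor's note] The recurring "Checking to see if procedure is done pid=55" entries are the client polling the master for procedure completion. A minimal sketch of the asynchronous form of the same truncate call, with the caveat that the exact wait strategy used by the test is not visible in this log:

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class TruncateAsyncSketch {
      // Submits the truncate and blocks until the master reports the
      // TruncateTableProcedure finished (or the timeout elapses).
      static void truncateAndWait(Admin admin) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        Future<Void> done = admin.truncateTableAsync(table, true);
        // The client-side polling seen in the log happens under this get().
        done.get(5, TimeUnit.MINUTES);
      }
    }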
2023-07-16 23:15:06,131 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=35dd369d62dceb76d38ad3136f60206c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,131 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=80ba2f12d1a6f6d9c893686c46e53bbe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,131 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=215bfba233a6e0e261ee96a214bb7976, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:06,131 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=66783cc6591fbacf71c14af590e3317e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:06,132 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549306131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306131"}]},"ts":"1689549306131"} 2023-07-16 23:15:06,131 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=6531f96d53025925be9f24cc17c810ef, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,132 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306131"}]},"ts":"1689549306131"} 2023-07-16 23:15:06,132 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306131"}]},"ts":"1689549306131"} 2023-07-16 23:15:06,131 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306131"}]},"ts":"1689549306131"} 2023-07-16 23:15:06,131 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549306131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306131"}]},"ts":"1689549306131"} 2023-07-16 23:15:06,134 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=56, state=RUNNABLE; OpenRegionProcedure 
215bfba233a6e0e261ee96a214bb7976, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:06,135 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; OpenRegionProcedure 6531f96d53025925be9f24cc17c810ef, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:06,137 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=59, state=RUNNABLE; OpenRegionProcedure 66783cc6591fbacf71c14af590e3317e, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:06,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=58, state=RUNNABLE; OpenRegionProcedure 80ba2f12d1a6f6d9c893686c46e53bbe, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:06,142 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=60, state=RUNNABLE; OpenRegionProcedure 35dd369d62dceb76d38ad3136f60206c, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:06,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:06,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 35dd369d62dceb76d38ad3136f60206c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 23:15:06,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:06,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,296 INFO [StoreOpener-35dd369d62dceb76d38ad3136f60206c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,298 DEBUG [StoreOpener-35dd369d62dceb76d38ad3136f60206c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/f 2023-07-16 23:15:06,298 DEBUG [StoreOpener-35dd369d62dceb76d38ad3136f60206c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/f 2023-07-16 23:15:06,299 INFO [StoreOpener-35dd369d62dceb76d38ad3136f60206c-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 35dd369d62dceb76d38ad3136f60206c columnFamilyName f 2023-07-16 23:15:06,300 INFO [StoreOpener-35dd369d62dceb76d38ad3136f60206c-1] regionserver.HStore(310): Store=35dd369d62dceb76d38ad3136f60206c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:06,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 
2023-07-16 23:15:06,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 66783cc6591fbacf71c14af590e3317e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 23:15:06,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:06,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,305 INFO [StoreOpener-66783cc6591fbacf71c14af590e3317e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 23:15:06,309 DEBUG [StoreOpener-66783cc6591fbacf71c14af590e3317e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/f 2023-07-16 23:15:06,309 DEBUG [StoreOpener-66783cc6591fbacf71c14af590e3317e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/f 2023-07-16 23:15:06,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:06,310 INFO [StoreOpener-66783cc6591fbacf71c14af590e3317e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 66783cc6591fbacf71c14af590e3317e columnFamilyName f 2023-07-16 23:15:06,311 INFO [StoreOpener-66783cc6591fbacf71c14af590e3317e-1] regionserver.HStore(310): Store=66783cc6591fbacf71c14af590e3317e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:06,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:06,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 35dd369d62dceb76d38ad3136f60206c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10985720960, jitterRate=0.023124992847442627}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:06,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 35dd369d62dceb76d38ad3136f60206c: 2023-07-16 23:15:06,314 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c., pid=65, masterSystemTime=1689549306289 2023-07-16 23:15:06,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:06,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:06,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:06,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 
2023-07-16 23:15:06,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80ba2f12d1a6f6d9c893686c46e53bbe, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 23:15:06,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:06,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,320 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=35dd369d62dceb76d38ad3136f60206c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,320 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549306320"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549306320"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549306320"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549306320"}]},"ts":"1689549306320"} 2023-07-16 23:15:06,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:06,324 INFO [StoreOpener-80ba2f12d1a6f6d9c893686c46e53bbe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,326 DEBUG [StoreOpener-80ba2f12d1a6f6d9c893686c46e53bbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/f 2023-07-16 23:15:06,326 DEBUG [StoreOpener-80ba2f12d1a6f6d9c893686c46e53bbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/f 2023-07-16 23:15:06,327 INFO [StoreOpener-80ba2f12d1a6f6d9c893686c46e53bbe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80ba2f12d1a6f6d9c893686c46e53bbe columnFamilyName f 2023-07-16 23:15:06,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 66783cc6591fbacf71c14af590e3317e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11000184640, jitterRate=0.024472028017044067}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:06,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 66783cc6591fbacf71c14af590e3317e: 2023-07-16 23:15:06,329 INFO [StoreOpener-80ba2f12d1a6f6d9c893686c46e53bbe-1] regionserver.HStore(310): Store=80ba2f12d1a6f6d9c893686c46e53bbe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:06,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e., pid=63, masterSystemTime=1689549306289 2023-07-16 23:15:06,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=60 2023-07-16 23:15:06,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; OpenRegionProcedure 35dd369d62dceb76d38ad3136f60206c, server=jenkins-hbase4.apache.org,33913,1689549296335 in 184 msec 2023-07-16 23:15:06,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:06,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:06,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 
2023-07-16 23:15:06,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 215bfba233a6e0e261ee96a214bb7976, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 23:15:06,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:06,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,336 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=66783cc6591fbacf71c14af590e3317e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:06,336 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306336"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549306336"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549306336"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549306336"}]},"ts":"1689549306336"} 2023-07-16 23:15:06,338 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, ASSIGN in 361 msec 2023-07-16 23:15:06,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:06,339 INFO [StoreOpener-215bfba233a6e0e261ee96a214bb7976-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,342 DEBUG [StoreOpener-215bfba233a6e0e261ee96a214bb7976-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/f 2023-07-16 23:15:06,342 DEBUG [StoreOpener-215bfba233a6e0e261ee96a214bb7976-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/f 2023-07-16 23:15:06,343 INFO [StoreOpener-215bfba233a6e0e261ee96a214bb7976-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 215bfba233a6e0e261ee96a214bb7976 columnFamilyName f 2023-07-16 23:15:06,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=59 2023-07-16 23:15:06,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=59, state=SUCCESS; OpenRegionProcedure 66783cc6591fbacf71c14af590e3317e, server=jenkins-hbase4.apache.org,38989,1689549296125 in 203 msec 2023-07-16 23:15:06,344 INFO [StoreOpener-215bfba233a6e0e261ee96a214bb7976-1] regionserver.HStore(310): Store=215bfba233a6e0e261ee96a214bb7976/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:06,345 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, ASSIGN in 370 msec 2023-07-16 23:15:06,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:06,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 80ba2f12d1a6f6d9c893686c46e53bbe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10993603840, jitterRate=0.023859143257141113}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:06,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 80ba2f12d1a6f6d9c893686c46e53bbe: 2023-07-16 23:15:06,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe., pid=64, masterSystemTime=1689549306289 2023-07-16 23:15:06,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 
2023-07-16 23:15:06,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:06,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:06,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6531f96d53025925be9f24cc17c810ef, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 23:15:06,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:06,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,362 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=80ba2f12d1a6f6d9c893686c46e53bbe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,363 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306362"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549306362"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549306362"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549306362"}]},"ts":"1689549306362"} 2023-07-16 23:15:06,364 INFO [StoreOpener-6531f96d53025925be9f24cc17c810ef-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,368 DEBUG [StoreOpener-6531f96d53025925be9f24cc17c810ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/f 2023-07-16 23:15:06,368 DEBUG [StoreOpener-6531f96d53025925be9f24cc17c810ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/f 
2023-07-16 23:15:06,368 INFO [StoreOpener-6531f96d53025925be9f24cc17c810ef-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6531f96d53025925be9f24cc17c810ef columnFamilyName f 2023-07-16 23:15:06,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=58 2023-07-16 23:15:06,369 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=58, state=SUCCESS; OpenRegionProcedure 80ba2f12d1a6f6d9c893686c46e53bbe, server=jenkins-hbase4.apache.org,33913,1689549296335 in 227 msec 2023-07-16 23:15:06,369 INFO [StoreOpener-6531f96d53025925be9f24cc17c810ef-1] regionserver.HStore(310): Store=6531f96d53025925be9f24cc17c810ef/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:06,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:06,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,371 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, ASSIGN in 396 msec 2023-07-16 23:15:06,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 215bfba233a6e0e261ee96a214bb7976; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10232683680, jitterRate=-0.04700706899166107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:06,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 215bfba233a6e0e261ee96a214bb7976: 2023-07-16 23:15:06,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976., pid=61, masterSystemTime=1689549306289 2023-07-16 23:15:06,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:06,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:06,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,376 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=215bfba233a6e0e261ee96a214bb7976, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:06,377 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549306376"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549306376"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549306376"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549306376"}]},"ts":"1689549306376"} 2023-07-16 23:15:06,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:06,381 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6531f96d53025925be9f24cc17c810ef; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11061479840, jitterRate=0.030180588364601135}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:06,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6531f96d53025925be9f24cc17c810ef: 2023-07-16 23:15:06,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=56 2023-07-16 23:15:06,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=56, state=SUCCESS; OpenRegionProcedure 215bfba233a6e0e261ee96a214bb7976, server=jenkins-hbase4.apache.org,38989,1689549296125 in 245 msec 2023-07-16 23:15:06,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef., pid=62, masterSystemTime=1689549306289 2023-07-16 23:15:06,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, ASSIGN in 409 msec 2023-07-16 23:15:06,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:06,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 
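The entries above trace the tail of TruncateTableProcedure pid=55 (preserveSplits=true): the five regions of Group_testTableMoveTruncateAndDrop are re-created, assigned (pids 56-60), and opened across the two group region servers. As a reading aid, the following is a minimal, hypothetical client-side sketch of the standard HBase 2.x Admin calls that produce such a procedure; it is not the test's actual source, and the class name and configuration handling are assumptions.

    // Hypothetical sketch only: standard HBase 2.x Admin API; not taken from the test code.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplits {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          admin.disableTable(table);          // truncate requires a disabled table
          admin.truncateTable(table, true);   // preserveSplits=true keeps the existing split
                                              // points, so the same set of regions is
                                              // re-created and reassigned by the master
        }
      }
    }

The blocking truncateTable call returning corresponds to the "Operation: TRUNCATE ... procId: 55 completed" entry further down in this log.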
2023-07-16 23:15:06,390 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=6531f96d53025925be9f24cc17c810ef, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,390 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306390"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549306390"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549306390"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549306390"}]},"ts":"1689549306390"} 2023-07-16 23:15:06,406 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-16 23:15:06,406 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; OpenRegionProcedure 6531f96d53025925be9f24cc17c810ef, server=jenkins-hbase4.apache.org,33913,1689549296335 in 261 msec 2023-07-16 23:15:06,411 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=55 2023-07-16 23:15:06,412 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, ASSIGN in 433 msec 2023-07-16 23:15:06,412 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549306412"}]},"ts":"1689549306412"} 2023-07-16 23:15:06,414 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 23:15:06,416 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-16 23:15:06,433 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 726 msec 2023-07-16 23:15:06,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 23:15:06,809 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-16 23:15:06,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:06,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:06,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:06,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:06,812 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:06,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:06,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:06,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-16 23:15:06,824 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549306824"}]},"ts":"1689549306824"} 2023-07-16 23:15:06,826 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 23:15:06,828 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 23:15:06,829 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, UNASSIGN}] 2023-07-16 23:15:06,831 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, UNASSIGN 2023-07-16 23:15:06,832 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, UNASSIGN 2023-07-16 23:15:06,832 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, UNASSIGN 2023-07-16 23:15:06,833 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, UNASSIGN 2023-07-16 
23:15:06,833 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, UNASSIGN 2023-07-16 23:15:06,833 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=35dd369d62dceb76d38ad3136f60206c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,834 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549306833"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306833"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306833"}]},"ts":"1689549306833"} 2023-07-16 23:15:06,834 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=80ba2f12d1a6f6d9c893686c46e53bbe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,834 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=215bfba233a6e0e261ee96a214bb7976, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:06,834 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306834"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306834"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306834"}]},"ts":"1689549306834"} 2023-07-16 23:15:06,834 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549306834"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306834"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306834"}]},"ts":"1689549306834"} 2023-07-16 23:15:06,836 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=66783cc6591fbacf71c14af590e3317e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:06,836 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306836"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306836"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306836"}]},"ts":"1689549306836"} 2023-07-16 23:15:06,837 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=6531f96d53025925be9f24cc17c810ef, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:06,837 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549306837"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549306837"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549306837"}]},"ts":"1689549306837"} 2023-07-16 23:15:06,838 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=69, state=RUNNABLE; CloseRegionProcedure 80ba2f12d1a6f6d9c893686c46e53bbe, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:06,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=71, state=RUNNABLE; CloseRegionProcedure 35dd369d62dceb76d38ad3136f60206c, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:06,840 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=67, state=RUNNABLE; CloseRegionProcedure 215bfba233a6e0e261ee96a214bb7976, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:06,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 66783cc6591fbacf71c14af590e3317e, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:06,842 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=68, state=RUNNABLE; CloseRegionProcedure 6531f96d53025925be9f24cc17c810ef, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:06,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-16 23:15:06,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:06,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6531f96d53025925be9f24cc17c810ef, disabling compactions & flushes 2023-07-16 23:15:06,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:06,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:06,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. after waiting 0 ms 2023-07-16 23:15:06,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 
2023-07-16 23:15:06,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:06,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 215bfba233a6e0e261ee96a214bb7976, disabling compactions & flushes 2023-07-16 23:15:06,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:06,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:06,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. after waiting 0 ms 2023-07-16 23:15:06,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 2023-07-16 23:15:07,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:07,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef. 2023-07-16 23:15:07,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6531f96d53025925be9f24cc17c810ef: 2023-07-16 23:15:07,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:07,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976. 
2023-07-16 23:15:07,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 215bfba233a6e0e261ee96a214bb7976: 2023-07-16 23:15:07,005 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=6531f96d53025925be9f24cc17c810ef, regionState=CLOSED 2023-07-16 23:15:07,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:07,005 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549307005"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549307005"}]},"ts":"1689549307005"} 2023-07-16 23:15:07,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:07,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:07,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:07,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 35dd369d62dceb76d38ad3136f60206c, disabling compactions & flushes 2023-07-16 23:15:07,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 66783cc6591fbacf71c14af590e3317e, disabling compactions & flushes 2023-07-16 23:15:07,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:07,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:07,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:07,008 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=215bfba233a6e0e261ee96a214bb7976, regionState=CLOSED 2023-07-16 23:15:07,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. after waiting 0 ms 2023-07-16 23:15:07,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 
2023-07-16 23:15:07,008 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549307007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549307007"}]},"ts":"1689549307007"} 2023-07-16 23:15:07,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:07,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. after waiting 0 ms 2023-07-16 23:15:07,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:07,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:07,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c. 2023-07-16 23:15:07,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 35dd369d62dceb76d38ad3136f60206c: 2023-07-16 23:15:07,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:07,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e. 2023-07-16 23:15:07,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 66783cc6591fbacf71c14af590e3317e: 2023-07-16 23:15:07,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:07,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:07,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 80ba2f12d1a6f6d9c893686c46e53bbe, disabling compactions & flushes 2023-07-16 23:15:07,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:07,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 
2023-07-16 23:15:07,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. after waiting 0 ms 2023-07-16 23:15:07,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 2023-07-16 23:15:07,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=68 2023-07-16 23:15:07,021 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=35dd369d62dceb76d38ad3136f60206c, regionState=CLOSED 2023-07-16 23:15:07,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=68, state=SUCCESS; CloseRegionProcedure 6531f96d53025925be9f24cc17c810ef, server=jenkins-hbase4.apache.org,33913,1689549296335 in 167 msec 2023-07-16 23:15:07,021 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=67 2023-07-16 23:15:07,022 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689549307021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549307021"}]},"ts":"1689549307021"} 2023-07-16 23:15:07,022 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=67, state=SUCCESS; CloseRegionProcedure 215bfba233a6e0e261ee96a214bb7976, server=jenkins-hbase4.apache.org,38989,1689549296125 in 170 msec 2023-07-16 23:15:07,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:07,023 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=66783cc6591fbacf71c14af590e3317e, regionState=CLOSED 2023-07-16 23:15:07,023 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549307023"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549307023"}]},"ts":"1689549307023"} 2023-07-16 23:15:07,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6531f96d53025925be9f24cc17c810ef, UNASSIGN in 193 msec 2023-07-16 23:15:07,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=215bfba233a6e0e261ee96a214bb7976, UNASSIGN in 193 msec 2023-07-16 23:15:07,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:07,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe. 
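In the entries that follow, DisableTableProcedure pid=66 finishes unassigning the five regions (UNASSIGN pids 67-71) and DeleteTableProcedure pid=77 begins archiving the region directories and removing the table from its rsgroup. A hypothetical sketch of the corresponding client-side calls, again using the standard HBase 2.x Admin API rather than the test's actual code:

    // Hypothetical sketch only; 'admin' is an org.apache.hadoop.hbase.client.Admin
    // obtained as in the earlier example.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropTableSketch {
      static void disableAndDrop(Admin admin) throws java.io.IOException {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        admin.disableTable(table);  // unassigns all regions (the CloseRegionProcedure entries above)
        admin.deleteTable(table);   // region dirs are moved aside by HFileArchiver, then the
                                    // table is removed from hbase:meta and its rsgroup
      }
    }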
2023-07-16 23:15:07,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 80ba2f12d1a6f6d9c893686c46e53bbe: 2023-07-16 23:15:07,027 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=71 2023-07-16 23:15:07,028 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=71, state=SUCCESS; CloseRegionProcedure 35dd369d62dceb76d38ad3136f60206c, server=jenkins-hbase4.apache.org,33913,1689549296335 in 184 msec 2023-07-16 23:15:07,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-16 23:15:07,029 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 66783cc6591fbacf71c14af590e3317e, server=jenkins-hbase4.apache.org,38989,1689549296125 in 185 msec 2023-07-16 23:15:07,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:07,031 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35dd369d62dceb76d38ad3136f60206c, UNASSIGN in 199 msec 2023-07-16 23:15:07,031 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=80ba2f12d1a6f6d9c893686c46e53bbe, regionState=CLOSED 2023-07-16 23:15:07,031 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689549307031"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549307031"}]},"ts":"1689549307031"} 2023-07-16 23:15:07,031 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66783cc6591fbacf71c14af590e3317e, UNASSIGN in 199 msec 2023-07-16 23:15:07,034 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-16 23:15:07,034 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; CloseRegionProcedure 80ba2f12d1a6f6d9c893686c46e53bbe, server=jenkins-hbase4.apache.org,33913,1689549296335 in 194 msec 2023-07-16 23:15:07,036 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-16 23:15:07,036 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=80ba2f12d1a6f6d9c893686c46e53bbe, UNASSIGN in 205 msec 2023-07-16 23:15:07,037 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549307037"}]},"ts":"1689549307037"} 2023-07-16 23:15:07,038 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 23:15:07,040 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 23:15:07,042 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; 
DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 228 msec 2023-07-16 23:15:07,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-16 23:15:07,121 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-16 23:15:07,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,137 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1620563459' 2023-07-16 23:15:07,138 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:07,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:07,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-16 23:15:07,156 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:07,156 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:07,156 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:07,156 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:07,156 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:07,159 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/recovered.edits] 2023-07-16 23:15:07,160 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/recovered.edits] 2023-07-16 23:15:07,161 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/recovered.edits] 2023-07-16 23:15:07,161 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/recovered.edits] 2023-07-16 23:15:07,161 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/recovered.edits] 2023-07-16 23:15:07,176 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e/recovered.edits/4.seqid 2023-07-16 23:15:07,176 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe/recovered.edits/4.seqid 2023-07-16 23:15:07,179 DEBUG 
[HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66783cc6591fbacf71c14af590e3317e 2023-07-16 23:15:07,180 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef/recovered.edits/4.seqid 2023-07-16 23:15:07,180 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/80ba2f12d1a6f6d9c893686c46e53bbe 2023-07-16 23:15:07,180 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976/recovered.edits/4.seqid 2023-07-16 23:15:07,180 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6531f96d53025925be9f24cc17c810ef 2023-07-16 23:15:07,181 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c/recovered.edits/4.seqid 2023-07-16 23:15:07,181 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/215bfba233a6e0e261ee96a214bb7976 2023-07-16 23:15:07,182 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35dd369d62dceb76d38ad3136f60206c 2023-07-16 23:15:07,182 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 23:15:07,185 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,192 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 23:15:07,196 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 
2023-07-16 23:15:07,197 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,197 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-16 23:15:07,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549307198"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:07,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549307198"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:07,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549307198"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:07,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549307198"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:07,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549307198"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:07,203 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 23:15:07,203 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 215bfba233a6e0e261ee96a214bb7976, NAME => 'Group_testTableMoveTruncateAndDrop,,1689549305757.215bfba233a6e0e261ee96a214bb7976.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 6531f96d53025925be9f24cc17c810ef, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689549305757.6531f96d53025925be9f24cc17c810ef.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 80ba2f12d1a6f6d9c893686c46e53bbe, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689549305757.80ba2f12d1a6f6d9c893686c46e53bbe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 66783cc6591fbacf71c14af590e3317e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689549305757.66783cc6591fbacf71c14af590e3317e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 35dd369d62dceb76d38ad3136f60206c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689549305757.35dd369d62dceb76d38ad3136f60206c.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 23:15:07,203 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-16 23:15:07,203 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549307203"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:07,205 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 23:15:07,208 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 23:15:07,209 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 79 msec 2023-07-16 23:15:07,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-16 23:15:07,256 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-16 23:15:07,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:07,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:07,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:07,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:07,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:07,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:07,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:07,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:07,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:07,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:07,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:07,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:07,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:07,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:07,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup default 2023-07-16 23:15:07,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:07,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:07,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1620563459, current retry=0 2023-07-16 23:15:07,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:07,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1620563459 => default 2023-07-16 23:15:07,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:07,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1620563459 2023-07-16 23:15:07,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:07,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:07,305 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:07,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:07,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-16 23:15:07,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:07,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:07,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550507319, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:07,320 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:07,322 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:07,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,324 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:07,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:07,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:07,354 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=504 (was 423) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:34675 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1339359975-172.31.14.131-1689549290377:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp276554477-638-acceptor-0@433a98c0-ServerConnector@4c8b9b{HTTP/1.1, (http/1.1)}{0.0.0.0:41531} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1996744945_17 at /127.0.0.1:45254 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp276554477-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63904@0x2a199d1b-SendThread(127.0.0.1:63904) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase4:43561 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002-prefix:jenkins-hbase4.apache.org,43561,1689549300217.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:33816 [Receiving block BP-1339359975-172.31.14.131-1689549290377:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1339359975-172.31.14.131-1689549290377:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:34675 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp276554477-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7cab2f4f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp276554477-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp276554477-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1339359975-172.31.14.131-1689549290377:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1339359975-172.31.14.131-1689549290377:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:44610 [Receiving block BP-1339359975-172.31.14.131-1689549290377:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1339359975-172.31.14.131-1689549290377:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:33774 [Receiving block BP-1339359975-172.31.14.131-1689549290377:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43561Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63904@0x2a199d1b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp276554477-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:43556 [Receiving block BP-1339359975-172.31.14.131-1689549290377:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp276554477-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63904@0x2a199d1b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp276554477-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:43504 [Receiving block BP-1339359975-172.31.14.131-1689549290377:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43561-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002-prefix:jenkins-hbase4.apache.org,43561,1689549300217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:44588 [Receiving block BP-1339359975-172.31.14.131-1689549290377:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1758400770_17 at /127.0.0.1:44636 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1339359975-172.31.14.131-1689549290377:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=807 (was 698) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 432), ProcessCount=176 (was 178), AvailableMemoryMB=3303 (was 3760) 2023-07-16 23:15:07,354 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-16 23:15:07,373 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=176, AvailableMemoryMB=3301 2023-07-16 23:15:07,373 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-16 23:15:07,374 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-16 23:15:07,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:07,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
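
The DEBUG line above, "rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.", shows the test's per-method cleanup moving an empty table list back to the default rsgroup and the server treating that as a no-op rather than an error. The following is a minimal, hypothetical sketch of that empty-set guard; the method name mirrors RSGroupAdminServer.moveTables from the stack traces, but the signature, types and messages here are simplified stand-ins, not the actual HBase source.

import java.util.Collections;
import java.util.Set;

public class MoveTablesGuardSketch {

    // Hypothetical stand-in for RSGroupAdminServer.moveTables(Set<TableName>, String).
    static void moveTables(Set<String> tables, String targetGroup) {
        // Guard implied by the log line: an empty (or null) table set is ignored,
        // so the cleanup RPC still returns success without touching any regions.
        if (tables == null || tables.isEmpty()) {
            System.out.println("DEBUG moveTables() passed an empty set. Ignoring.");
            return;
        }
        // A real implementation would validate that targetGroup exists and then
        // reassign each table's regions onto servers belonging to that group.
        System.out.println("Moving " + tables.size() + " table(s) to rsgroup " + targetGroup);
    }

    public static void main(String[] args) {
        // Mirrors the cleanup call seen in the log: move tables [] to rsgroup default.
        moveTables(Collections.emptySet(), "default");
    }
}

Under this reading, the cleanup in TestRSGroupsBase can unconditionally issue "move tables [] to rsgroup default" between test methods and rely on the server-side guard to make it cheap when there is nothing to move.
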
2023-07-16 23:15:07,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:07,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:07,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:07,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:07,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:07,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:07,397 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:07,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:07,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:07,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:07,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:07,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550507410, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:07,411 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:07,413 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:07,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,414 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:07,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:07,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:07,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-16 23:15:07,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:42846 deadline: 1689550507416, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 23:15:07,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-16 23:15:07,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:42846 deadline: 1689550507418, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 23:15:07,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-16 23:15:07,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:42846 deadline: 1689550507419, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 23:15:07,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-16 23:15:07,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-16 23:15:07,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:07,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:07,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:07,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
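[editor's note] The rejections of foo*, foo@ and - above, together with the acceptance of foo_123, come from the server-side group-name check at RSGroupInfoManagerImpl.checkGroupName (line 932 in these traces). Below is a minimal, self-contained sketch of that kind of check; the regex, class name, and exception type are assumptions inferred from the observed behavior (alphanumerics plus underscore accepted), not a copy of the HBase implementation.

    // Illustrative sketch only: a name check consistent with the rejections logged
    // above (foo*, foo@, "-" rejected; foo_123 accepted). Pattern and names are
    // assumed, not HBase's actual RSGroupInfoManagerImpl code.
    import java.util.regex.Pattern;

    public final class GroupNameCheck {
      // Alphanumerics and underscore, at least one character (assumption).
      private static final Pattern VALID = Pattern.compile("^[A-Za-z0-9_]+$");

      static void checkGroupName(String name) {
        if (name == null || !VALID.matcher(name).matches()) {
          throw new IllegalArgumentException(
              "RSGroup name should only contain alphanumeric characters: " + name);
        }
      }

      public static void main(String[] args) {
        for (String n : new String[] {"foo*", "foo@", "-", "foo_123"}) {
          try {
            checkGroupName(n);
            System.out.println(n + " -> accepted");
          } catch (IllegalArgumentException e) {
            System.out.println(n + " -> rejected: " + e.getMessage());
          }
        }
      }
    }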
2023-07-16 23:15:07,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:07,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:07,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:07,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-16 23:15:07,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:07,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:07,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:07,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:07,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:07,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:07,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:07,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:07,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:07,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:07,476 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:07,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:07,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:07,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:07,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:07,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550507491, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:07,492 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:07,494 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:07,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,496 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:07,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:07,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:07,520 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=807 (was 807), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=176 (was 176), AvailableMemoryMB=3294 (was 3301) 2023-07-16 23:15:07,520 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-16 23:15:07,540 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=176, AvailableMemoryMB=3293 2023-07-16 23:15:07,540 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-16 23:15:07,540 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-16 23:15:07,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:07,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
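[editor's note] The recurring ConstraintException in this log ("Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist") is raised by RSGroupAdminServer.moveServers when the requested address is not among the online region servers; 37359 is the active master's RPC port here, while the live region servers are 33913, 38989, 41683 and 43561. A hedged client-side sketch of guarding such a move by checking the live region servers first is below; the connection setup, the RSGroupAdminClient constructor, and the target address are assumptions for illustration.

    // Sketch under assumptions: guard an rsgroup moveServers call by first checking
    // the cluster's live region servers via ClusterMetrics. Minimal error handling.
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class SafeMoveServers {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Hypothetical target taken from the log; not a region server.
          Address target = Address.fromParts("jenkins-hbase4.apache.org", 37359);
          boolean online = false;
          for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
            if (sn.getAddress().equals(target)) {
              online = true;
              break;
            }
          }
          if (!online) {
            System.out.println("Skipping move: " + target + " is not a live region server");
            return;
          }
          new RSGroupAdminClient(conn).moveServers(Collections.singleton(target), "master");
        }
      }
    }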
2023-07-16 23:15:07,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:07,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:07,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:07,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:07,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:07,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:07,560 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:07,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:07,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:07,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:07,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:07,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:07,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550507577, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:07,578 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:07,580 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:07,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,581 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:07,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:07,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:07,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:07,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:07,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
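[editor's note] The "add rsgroup bar" entry above and the "move servers [...] to rsgroup bar" entry that follows correspond to client calls of roughly the following shape. This is a sketch assuming the RSGroupAdminClient visible in the stack traces (addRSGroup and moveServers(Set<Address>, String)); the connection setup and constructor are assumed, and the addresses are the ones logged here.

    // Sketch only: the client-side calls that would produce the "add rsgroup bar"
    // and "move servers [...] to rsgroup bar" requests seen in this log.
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveServersToBar {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");

          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41683));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33913));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38989));
          // Regions hosted by these servers that belong to other groups (here the
          // hbase:namespace region) are moved off, which is the REOPEN/MOVE
          // procedure that follows in the log.
          rsGroupAdmin.moveServers(servers, "bar");
        }
      }
    }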
2023-07-16 23:15:07,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 23:15:07,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:07,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:07,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:07,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:07,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup bar 2023-07-16 23:15:07,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:07,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 23:15:07,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:07,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:07,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(238): Moving server region 246728e01e8e564172b05cb8c4263f93, which do not belong to RSGroup bar 2023-07-16 23:15:07,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, REOPEN/MOVE 2023-07-16 23:15:07,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 23:15:07,607 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, REOPEN/MOVE 2023-07-16 23:15:07,608 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=246728e01e8e564172b05cb8c4263f93, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:07,608 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549307608"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549307608"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549307608"}]},"ts":"1689549307608"} 2023-07-16 23:15:07,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 246728e01e8e564172b05cb8c4263f93, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:07,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:07,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 246728e01e8e564172b05cb8c4263f93, disabling compactions & flushes 2023-07-16 23:15:07,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:07,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:07,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. after waiting 0 ms 2023-07-16 23:15:07,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:07,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 246728e01e8e564172b05cb8c4263f93 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-16 23:15:07,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/.tmp/info/61a9d98018f148219aab802b445f9b04 2023-07-16 23:15:07,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/.tmp/info/61a9d98018f148219aab802b445f9b04 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info/61a9d98018f148219aab802b445f9b04 2023-07-16 23:15:07,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info/61a9d98018f148219aab802b445f9b04, entries=2, sequenceid=6, filesize=4.8 K 2023-07-16 23:15:07,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 246728e01e8e564172b05cb8c4263f93 in 50ms, sequenceid=6, compaction requested=false 2023-07-16 23:15:07,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-16 23:15:07,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:07,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 246728e01e8e564172b05cb8c4263f93: 2023-07-16 23:15:07,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 246728e01e8e564172b05cb8c4263f93 move to jenkins-hbase4.apache.org,43561,1689549300217 record at close sequenceid=6 2023-07-16 23:15:07,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:07,837 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=246728e01e8e564172b05cb8c4263f93, regionState=CLOSED 2023-07-16 23:15:07,837 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549307837"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549307837"}]},"ts":"1689549307837"} 2023-07-16 23:15:07,841 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-16 23:15:07,841 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 246728e01e8e564172b05cb8c4263f93, server=jenkins-hbase4.apache.org,41683,1689549296507 in 229 msec 2023-07-16 23:15:07,841 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:07,992 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=246728e01e8e564172b05cb8c4263f93, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:07,992 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549307992"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549307992"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549307992"}]},"ts":"1689549307992"} 2023-07-16 23:15:07,996 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 246728e01e8e564172b05cb8c4263f93, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:08,157 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 
2023-07-16 23:15:08,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 246728e01e8e564172b05cb8c4263f93, NAME => 'hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:08,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:08,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,161 INFO [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,162 DEBUG [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info 2023-07-16 23:15:08,162 DEBUG [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info 2023-07-16 23:15:08,163 INFO [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 246728e01e8e564172b05cb8c4263f93 columnFamilyName info 2023-07-16 23:15:08,173 DEBUG [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] regionserver.HStore(539): loaded hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/info/61a9d98018f148219aab802b445f9b04 2023-07-16 23:15:08,173 INFO [StoreOpener-246728e01e8e564172b05cb8c4263f93-1] regionserver.HStore(310): Store=246728e01e8e564172b05cb8c4263f93/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:08,174 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,180 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:08,182 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 246728e01e8e564172b05cb8c4263f93; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9463081120, jitterRate=-0.1186818927526474}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:08,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 246728e01e8e564172b05cb8c4263f93: 2023-07-16 23:15:08,183 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93., pid=80, masterSystemTime=1689549308151 2023-07-16 23:15:08,185 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:08,185 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 
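Editor's note: the close/flush/reopen of the hbase:namespace region above (flushed and closed on jenkins-hbase4.apache.org,41683, reopened on jenkins-hbase4.apache.org,43561) is a side effect of the MoveServers request that is acknowledged just below with "Move servers done: default => bar": regions hosted on servers leaving the default group are re-homed onto the server that stays behind. As a rough orientation only, the following is a minimal sketch of what such a server move looks like from the client side; the use of the hbase-rsgroup RSGroupAdminClient helper and the class name are assumptions of this note, not code taken from the test, while the host:port pairs come from this log.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Illustrative sketch (not the test's code): move three region servers from 'default' to 'bar'.
// The host:port pairs below are the ones visible in this log.
public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<Address> servers = new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 33913),
          Address.fromParts("jenkins-hbase4.apache.org", 38989),
          Address.fromParts("jenkins-hbase4.apache.org", 41683)));
      // Regions hosted on these servers (here, hbase:namespace) get re-homed onto the
      // server remaining in 'default', which is the REOPEN/MOVE sequence logged above.
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}
```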
2023-07-16 23:15:08,186 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=246728e01e8e564172b05cb8c4263f93, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:08,186 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549308186"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549308186"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549308186"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549308186"}]},"ts":"1689549308186"} 2023-07-16 23:15:08,190 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-16 23:15:08,190 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 246728e01e8e564172b05cb8c4263f93, server=jenkins-hbase4.apache.org,43561,1689549300217 in 192 msec 2023-07-16 23:15:08,192 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=246728e01e8e564172b05cb8c4263f93, REOPEN/MOVE in 585 msec 2023-07-16 23:15:08,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-16 23:15:08,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125, jenkins-hbase4.apache.org,41683,1689549296507] are moved back to default 2023-07-16 23:15:08,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-16 23:15:08,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:08,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:08,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:08,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 23:15:08,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:08,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-16 23:15:08,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:08,624 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:08,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-16 23:15:08,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 23:15:08,628 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:08,629 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 23:15:08,629 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:08,630 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:08,634 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:08,636 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:08,636 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 empty. 
2023-07-16 23:15:08,637 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:08,637 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 23:15:08,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 23:15:08,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 23:15:09,066 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:09,067 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b032c73861ba41c5011ccef631c6ae90, NAME => 'Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:09,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 23:15:09,237 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 23:15:09,481 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:09,481 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing b032c73861ba41c5011ccef631c6ae90, disabling compactions & flushes 2023-07-16 23:15:09,481 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:09,481 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:09,482 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. after waiting 0 ms 2023-07-16 23:15:09,482 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
2023-07-16 23:15:09,482 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:09,482 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:09,484 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:09,485 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549309485"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549309485"}]},"ts":"1689549309485"} 2023-07-16 23:15:09,487 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:09,488 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:09,488 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549309488"}]},"ts":"1689549309488"} 2023-07-16 23:15:09,489 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-16 23:15:09,497 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, ASSIGN}] 2023-07-16 23:15:09,500 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, ASSIGN 2023-07-16 23:15:09,501 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:09,653 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:09,653 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549309653"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549309653"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549309653"}]},"ts":"1689549309653"} 2023-07-16 23:15:09,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure b032c73861ba41c5011ccef631c6ae90, 
server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:09,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 23:15:09,817 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:09,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b032c73861ba41c5011ccef631c6ae90, NAME => 'Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:09,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:09,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,820 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,822 DEBUG [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f 2023-07-16 23:15:09,822 DEBUG [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f 2023-07-16 23:15:09,822 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b032c73861ba41c5011ccef631c6ae90 columnFamilyName f 2023-07-16 23:15:09,824 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] regionserver.HStore(310): Store=b032c73861ba41c5011ccef631c6ae90/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:09,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:09,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:09,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b032c73861ba41c5011ccef631c6ae90; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10297708000, jitterRate=-0.040951207280159}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:09,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:09,833 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90., pid=83, masterSystemTime=1689549309814 2023-07-16 23:15:09,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:09,834 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
2023-07-16 23:15:09,835 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:09,835 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549309835"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549309835"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549309835"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549309835"}]},"ts":"1689549309835"} 2023-07-16 23:15:09,840 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-16 23:15:09,840 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217 in 178 msec 2023-07-16 23:15:09,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-16 23:15:09,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, ASSIGN in 343 msec 2023-07-16 23:15:09,843 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:09,843 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549309843"}]},"ts":"1689549309843"} 2023-07-16 23:15:09,844 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-16 23:15:09,847 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:09,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 1.2270 sec 2023-07-16 23:15:10,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 23:15:10,735 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-16 23:15:10,735 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-16 23:15:10,735 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:10,744 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
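Editor's note: pid=81 above is the CreateTableProcedure for the descriptor logged at 23:15:08,621, i.e. table Group_testFailRemoveGroup with REGION_REPLICATION => '1' and a single column family 'f' left at its defaults. A minimal client-side sketch of creating an equivalent table with the HBase 2.x Admin API is shown below; the test's own create call is not visible in this log, so this is an illustration of the logged descriptor rather than a quote of the test.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Illustrative sketch: one column family 'f', region replication 1, all other
// attributes at their defaults, matching the descriptor logged for pid=81.
public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptorBuilder builder =
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
      // The master runs the CreateTableProcedure (CREATE_TABLE_* states) shown above.
      admin.createTable(builder.build());
    }
  }
}
```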
2023-07-16 23:15:10,744 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:10,744 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-16 23:15:10,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-16 23:15:10,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:10,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 23:15:10,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:10,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:10,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-16 23:15:10,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region b032c73861ba41c5011ccef631c6ae90 to RSGroup bar 2023-07-16 23:15:10,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:10,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:10,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:10,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:10,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 23:15:10,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:10,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE 2023-07-16 23:15:10,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-16 23:15:10,757 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE 2023-07-16 23:15:10,758 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:10,758 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549310758"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549310758"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549310758"}]},"ts":"1689549310758"} 2023-07-16 23:15:10,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:10,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:10,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b032c73861ba41c5011ccef631c6ae90, disabling compactions & flushes 2023-07-16 23:15:10,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:10,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:10,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. after waiting 0 ms 2023-07-16 23:15:10,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:10,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:10,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
2023-07-16 23:15:10,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:10,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b032c73861ba41c5011ccef631c6ae90 move to jenkins-hbase4.apache.org,38989,1689549296125 record at close sequenceid=2 2023-07-16 23:15:10,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:10,928 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=CLOSED 2023-07-16 23:15:10,928 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549310928"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549310928"}]},"ts":"1689549310928"} 2023-07-16 23:15:10,933 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-16 23:15:10,933 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217 in 171 msec 2023-07-16 23:15:10,934 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38989,1689549296125; forceNewPlan=false, retain=false 2023-07-16 23:15:11,084 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 23:15:11,085 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:11,085 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549311084"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549311084"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549311084"}]},"ts":"1689549311084"} 2023-07-16 23:15:11,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:11,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
2023-07-16 23:15:11,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b032c73861ba41c5011ccef631c6ae90, NAME => 'Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:11,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:11,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,253 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,254 DEBUG [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f 2023-07-16 23:15:11,255 DEBUG [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f 2023-07-16 23:15:11,255 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b032c73861ba41c5011ccef631c6ae90 columnFamilyName f 2023-07-16 23:15:11,256 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] regionserver.HStore(310): Store=b032c73861ba41c5011ccef631c6ae90/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:11,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,259 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,263 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b032c73861ba41c5011ccef631c6ae90; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10913895840, jitterRate=0.016435757279396057}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:11,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:11,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90., pid=86, masterSystemTime=1689549311243 2023-07-16 23:15:11,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:11,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:11,267 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:11,267 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549311266"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549311266"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549311266"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549311266"}]},"ts":"1689549311266"} 2023-07-16 23:15:11,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-16 23:15:11,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,38989,1689549296125 in 181 msec 2023-07-16 23:15:11,274 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE in 516 msec 2023-07-16 23:15:11,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-16 23:15:11,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
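Editor's note: at this point the single region of Group_testFailRemoveGroup has been reopened on jenkins-hbase4.apache.org,38989 and the master reports every region of the table as hosted by group bar, completing the MoveTables request acknowledged immediately below. A hedged sketch of the corresponding client call follows, again assuming the hbase-rsgroup RSGroupAdminClient helper (an assumption of this note), together with a simple read of region locations to see where the region landed.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Illustrative sketch (not the test's code): move the table into group 'bar',
// then list which servers host its regions afterwards.
public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(Collections.singleton(table), "bar");
      try (RegionLocator locator = conn.getRegionLocator(table)) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }
  }
}
```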
2023-07-16 23:15:11,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:11,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:11,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:11,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 23:15:11,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:11,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 23:15:11,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:11,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:42846 deadline: 1689550511765, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-16 23:15:11,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup default 2023-07-16 23:15:11,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:11,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 291 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:42846 deadline: 1689550511767, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-16 23:15:11,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-16 23:15:11,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:11,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 23:15:11,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:11,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:11,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-16 23:15:11,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region b032c73861ba41c5011ccef631c6ae90 to RSGroup default 2023-07-16 23:15:11,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE 2023-07-16 23:15:11,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 23:15:11,778 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE 2023-07-16 23:15:11,779 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:11,779 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549311779"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549311779"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549311779"}]},"ts":"1689549311779"} 2023-07-16 23:15:11,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:11,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b032c73861ba41c5011ccef631c6ae90, disabling compactions & flushes 2023-07-16 23:15:11,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:11,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:11,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. after waiting 0 ms 2023-07-16 23:15:11,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:11,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:11,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
2023-07-16 23:15:11,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:11,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b032c73861ba41c5011ccef631c6ae90 move to jenkins-hbase4.apache.org,43561,1689549300217 record at close sequenceid=5 2023-07-16 23:15:11,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:11,947 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=CLOSED 2023-07-16 23:15:11,947 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549311947"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549311947"}]},"ts":"1689549311947"} 2023-07-16 23:15:11,950 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-16 23:15:11,950 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,38989,1689549296125 in 167 msec 2023-07-16 23:15:11,950 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:12,101 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:12,101 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549312101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549312101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549312101"}]},"ts":"1689549312101"} 2023-07-16 23:15:12,103 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:12,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
2023-07-16 23:15:12,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b032c73861ba41c5011ccef631c6ae90, NAME => 'Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:12,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:12,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,269 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,271 DEBUG [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f 2023-07-16 23:15:12,271 DEBUG [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f 2023-07-16 23:15:12,271 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b032c73861ba41c5011ccef631c6ae90 columnFamilyName f 2023-07-16 23:15:12,272 INFO [StoreOpener-b032c73861ba41c5011ccef631c6ae90-1] regionserver.HStore(310): Store=b032c73861ba41c5011ccef631c6ae90/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:12,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,275 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:12,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b032c73861ba41c5011ccef631c6ae90; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10414044160, jitterRate=-0.030116558074951172}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:12,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:12,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90., pid=89, masterSystemTime=1689549312255 2023-07-16 23:15:12,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:12,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:12,282 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:12,283 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549312282"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549312282"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549312282"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549312282"}]},"ts":"1689549312282"} 2023-07-16 23:15:12,286 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-16 23:15:12,286 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217 in 181 msec 2023-07-16 23:15:12,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, REOPEN/MOVE in 510 msec 2023-07-16 23:15:12,535 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-16 23:15:12,535 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 23:15:12,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-16 23:15:12,778 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-16 23:15:12,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:12,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:12,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:12,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 23:15:12,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:12,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 298 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:42846 deadline: 1689550512785, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
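The ConstraintException above is the guard this test expects: removeRSGroup refuses to drop a group that still owns servers. A hedged sketch of the corresponding client sequence, using the RSGroupAdminClient from this hbase-rsgroup module; the constructor and method signatures are as assumed for branch-2.4, and the group name "bar" only mirrors the log, so treat this as a sketch rather than a verified recipe:

import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RemoveGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "bar"; // placeholder group name
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
      Set<Address> servers = info.getServers();
      // removeRSGroup() throws ConstraintException while the group still has servers,
      // so drain them into the default group first.
      if (!servers.isEmpty()) {
        rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
      }
      rsGroupAdmin.removeRSGroup(group);
    }
  }
}

The entries that follow show exactly this order of operations: the three servers are moved back to "default" and the second "remove rsgroup bar" request then succeeds.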
2023-07-16 23:15:12,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup default 2023-07-16 23:15:12,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:12,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 23:15:12,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:12,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:12,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-16 23:15:12,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125, jenkins-hbase4.apache.org,41683,1689549296507] are moved back to bar 2023-07-16 23:15:12,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-16 23:15:12,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:12,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:12,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:12,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 23:15:12,798 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41683] ipc.CallRunner(144): callId: 213 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:49456 deadline: 1689549372798, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43561 startCode=1689549300217. As of locationSeqNum=6. 
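The RegionMovedException at the tail of the previous entry is what a client sees when its cached region location still points at the old server; the HBase client library retries and refreshes the location cache on its own. For completeness, a small sketch of forcing that refresh explicitly via RegionLocator, with the row key and table name as placeholders; ordinary Gets and Scans do not need this:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RefreshLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
      // reload=true skips the cached location and asks hbase:meta again, which is
      // effectively what the client does internally after a RegionMovedException.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("anyRow"), true);
      System.out.println("Region now on " + loc.getServerName());
    }
  }
}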
2023-07-16 23:15:12,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:12,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:12,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:12,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:12,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:12,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:12,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:12,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:12,927 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-16 23:15:12,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-16 23:15:12,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:12,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 23:15:12,935 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549312935"}]},"ts":"1689549312935"} 2023-07-16 23:15:12,937 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-16 23:15:12,939 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-16 23:15:12,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, UNASSIGN}] 2023-07-16 23:15:12,942 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, UNASSIGN 2023-07-16 23:15:12,943 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:12,943 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549312943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549312943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549312943"}]},"ts":"1689549312943"} 2023-07-16 23:15:12,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:13,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 23:15:13,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:13,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b032c73861ba41c5011ccef631c6ae90, disabling compactions & flushes 2023-07-16 23:15:13,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:13,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:13,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. after waiting 0 ms 2023-07-16 23:15:13,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 2023-07-16 23:15:13,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 23:15:13,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90. 
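The DisableTableProcedure above (pid=90) unassigns the table's single region and, as part of the close, writes a final seqid marker under recovered.edits. A minimal sketch of issuing the same disable from a client; the table name is a placeholder:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup"); // placeholder
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        // Blocks until the DisableTableProcedure (and its region UNASSIGN children) finish.
        admin.disableTable(table);
      }
      System.out.println("disabled: " + admin.isTableDisabled(table));
    }
  }
}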
2023-07-16 23:15:13,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b032c73861ba41c5011ccef631c6ae90: 2023-07-16 23:15:13,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:13,115 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=b032c73861ba41c5011ccef631c6ae90, regionState=CLOSED 2023-07-16 23:15:13,115 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689549313115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549313115"}]},"ts":"1689549313115"} 2023-07-16 23:15:13,121 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-16 23:15:13,121 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure b032c73861ba41c5011ccef631c6ae90, server=jenkins-hbase4.apache.org,43561,1689549300217 in 172 msec 2023-07-16 23:15:13,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-16 23:15:13,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b032c73861ba41c5011ccef631c6ae90, UNASSIGN in 181 msec 2023-07-16 23:15:13,124 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549313124"}]},"ts":"1689549313124"} 2023-07-16 23:15:13,125 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-16 23:15:13,127 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-16 23:15:13,130 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 200 msec 2023-07-16 23:15:13,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 23:15:13,236 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-16 23:15:13,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-16 23:15:13,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:13,240 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:13,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-16 23:15:13,241 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:13,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:13,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:13,247 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:13,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 23:15:13,257 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits] 2023-07-16 23:15:13,265 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits/10.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90/recovered.edits/10.seqid 2023-07-16 23:15:13,266 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testFailRemoveGroup/b032c73861ba41c5011ccef631c6ae90 2023-07-16 23:15:13,266 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 23:15:13,270 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:13,273 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-16 23:15:13,275 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-16 23:15:13,277 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:13,277 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
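With the table disabled, DeleteTableProcedure (pid=93) archives the region directory via HFileArchiver, deletes the rows from hbase:meta, and removes the table descriptor, as the entries above show. A matching client-side sketch with the same placeholder table name; deleteTable fails unless the table is already disabled:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup"); // placeholder
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Precondition: the table is already disabled (see the previous sketch).
      admin.deleteTable(table);
      // Once the DeleteTableProcedure completes, the table is gone from hbase:meta.
      System.out.println("still exists: " + admin.tableExists(table));
    }
  }
}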
2023-07-16 23:15:13,277 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549313277"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:13,279 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 23:15:13,279 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b032c73861ba41c5011ccef631c6ae90, NAME => 'Group_testFailRemoveGroup,,1689549308621.b032c73861ba41c5011ccef631c6ae90.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 23:15:13,280 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-16 23:15:13,280 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549313280"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:13,282 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-16 23:15:13,284 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 23:15:13,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 47 msec 2023-07-16 23:15:13,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 23:15:13,349 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-16 23:15:13,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:13,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:13,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:13,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:13,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:13,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:13,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:13,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:13,368 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:13,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:13,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:13,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:13,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:13,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:13,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:13,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 346 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550513380, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:13,381 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:13,382 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:13,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,384 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:13,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:13,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:13,402 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=509 (was 507) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1996744945_17 at /127.0.0.1:50198 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0x29a77039-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:43706 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=808 (was 807) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=424 (was 426), ProcessCount=176 (was 176), AvailableMemoryMB=2996 (was 3293) 2023-07-16 23:15:13,403 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-16 23:15:13,419 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509, OpenFileDescriptor=808, MaxFileDescriptor=60000, SystemLoadAverage=424, ProcessCount=176, AvailableMemoryMB=2994 2023-07-16 23:15:13,420 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-16 23:15:13,420 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-16 23:15:13,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:13,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
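Between test methods the harness keeps listing the groups until only "default" (holding every server plus the system tables) and an empty "master" group remain, as in the "Waiting for cleanup to finish" entry earlier. A small sketch of the same verification, with the RSGroupAdminClient signatures assumed as before:

import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListGroupsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Print every group with its member servers and assigned tables.
      List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
      for (RSGroupInfo g : groups) {
        System.out.println(g.getName() + " servers=" + g.getServers() + " tables=" + g.getTables());
      }
    }
  }
}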
2023-07-16 23:15:13,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:13,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:13,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:13,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:13,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:13,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:13,435 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:13,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:13,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:13,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:13,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:13,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:13,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:13,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 374 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550513447, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:13,448 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:13,452 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:13,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,453 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:13,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:13,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:13,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:13,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:13,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:13,460 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:13,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:13,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913] to rsgroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:13,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:13,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 23:15:13,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335] are moved back to default 2023-07-16 23:15:13,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:13,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:13,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:13,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): 
User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:13,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:13,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:13,483 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:13,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-16 23:15:13,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 23:15:13,485 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:13,486 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:13,486 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:13,486 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:13,491 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:13,493 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,493 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 empty. 
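The records above show the test creating an rsgroup (RSGroupAdminService.AddRSGroup) and moving one region server into it (RSGroupAdminService.MoveServers). A minimal client-side sketch of those two calls follows, assuming the branch-2 hbase-rsgroup RSGroupAdminClient API that the stack traces in this log reference; the group name and host:port literals are copied from the log for illustration only.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupMoveServerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Coprocessor-backed rsgroup admin client, as seen in the stack traces above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create the target group (RSGroupAdminService.AddRSGroup in the log).
      String group = "Group_testMultiTableMove_1188213360";
      rsGroupAdmin.addRSGroup(group);

      // Move one region server into it (RSGroupAdminService.MoveServers in the log).
      // host:port copied from the log; adjust for a real cluster.
      Address server = Address.fromString("jenkins-hbase4.apache.org:33913");
      rsGroupAdmin.moveServers(Collections.singleton(server), group);
    }
  }
}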
2023-07-16 23:15:13,494 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,494 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 23:15:13,509 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:13,510 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => ccd5b343284bb60b014fa4360b1ec243, NAME => 'GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:13,521 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:13,521 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing ccd5b343284bb60b014fa4360b1ec243, disabling compactions & flushes 2023-07-16 23:15:13,521 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:13,521 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:13,521 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. after waiting 0 ms 2023-07-16 23:15:13,521 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:13,521 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 
2023-07-16 23:15:13,521 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for ccd5b343284bb60b014fa4360b1ec243: 2023-07-16 23:15:13,523 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:13,524 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549313524"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549313524"}]},"ts":"1689549313524"} 2023-07-16 23:15:13,526 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:13,527 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:13,527 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549313527"}]},"ts":"1689549313527"} 2023-07-16 23:15:13,528 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-16 23:15:13,532 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:13,532 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:13,532 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:13,532 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:13,532 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:13,532 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, ASSIGN}] 2023-07-16 23:15:13,534 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, ASSIGN 2023-07-16 23:15:13,535 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:13,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 23:15:13,685 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:13,687 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:13,687 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549313687"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549313687"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549313687"}]},"ts":"1689549313687"} 2023-07-16 23:15:13,689 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:13,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 23:15:13,844 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:13,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ccd5b343284bb60b014fa4360b1ec243, NAME => 'GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:13,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:13,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,846 INFO [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,848 DEBUG [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/f 2023-07-16 23:15:13,848 DEBUG [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/f 2023-07-16 23:15:13,848 INFO [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ccd5b343284bb60b014fa4360b1ec243 columnFamilyName f 2023-07-16 23:15:13,849 INFO [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] regionserver.HStore(310): Store=ccd5b343284bb60b014fa4360b1ec243/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:13,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:13,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:13,855 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ccd5b343284bb60b014fa4360b1ec243; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10073015040, jitterRate=-0.06187736988067627}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:13,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ccd5b343284bb60b014fa4360b1ec243: 2023-07-16 23:15:13,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243., pid=96, masterSystemTime=1689549313840 2023-07-16 23:15:13,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:13,857 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 
2023-07-16 23:15:13,858 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:13,858 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549313857"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549313857"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549313857"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549313857"}]},"ts":"1689549313857"} 2023-07-16 23:15:13,861 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-16 23:15:13,861 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,41683,1689549296507 in 170 msec 2023-07-16 23:15:13,862 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-16 23:15:13,862 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, ASSIGN in 329 msec 2023-07-16 23:15:13,863 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:13,863 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549313863"}]},"ts":"1689549313863"} 2023-07-16 23:15:13,865 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-16 23:15:13,867 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:13,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 386 msec 2023-07-16 23:15:14,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 23:15:14,089 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-16 23:15:14,089 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-16 23:15:14,089 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:14,095 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
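pid=94 above is a CreateTableProcedure for GrouptestMultiTableMoveA with a single column family 'f' and REGION_REPLICATION => '1'. A sketch of the equivalent client call, assuming the standard HBase 2.x Admin / TableDescriptorBuilder API (the table-name parameter is illustrative; the test utility additionally waits for assignment, as the HBaseTestingUtility/Waiter records above show):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMoveTableSketch {
  /** Create a table shaped like the one in the CreateTableProcedure records above. */
  static void createTable(Connection conn, String tableName) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf(tableName))
        .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))   // NAME => 'f', other attributes left at defaults
        .build();
    try (Admin admin = conn.getAdmin()) {
      // The master runs a CreateTableProcedure for this request (pid=94 / pid=97 in the log).
      admin.createTable(desc);
    }
  }
}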
2023-07-16 23:15:14,096 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:14,096 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-16 23:15:14,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:14,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:14,107 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:14,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-16 23:15:14,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 23:15:14,401 INFO [AsyncFSWAL-0-hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002-prefix:jenkins-hbase4.apache.org,43561,1689549300217] wal.AbstractFSWAL(1141): Slow sync cost: 291 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39013,DS-f0cd7a4e-c855-48a4-9ece-d5b46f489b8e,DISK], DatanodeInfoWithStorage[127.0.0.1:35019,DS-7aac909c-0053-4071-bacc-86c8683b259e,DISK], DatanodeInfoWithStorage[127.0.0.1:39633,DS-cac95491-a5d8-4b6e-8b8f-24240dccb300,DISK]] 2023-07-16 23:15:14,402 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:14,403 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:14,403 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:14,404 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:14,408 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:14,410 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,411 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 empty. 
2023-07-16 23:15:14,412 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,412 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 23:15:14,428 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:14,429 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 21a5d276c179666f4f8c0e9585fe9212, NAME => 'GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:14,444 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:14,444 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 21a5d276c179666f4f8c0e9585fe9212, disabling compactions & flushes 2023-07-16 23:15:14,444 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:14,445 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:14,445 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. after waiting 0 ms 2023-07-16 23:15:14,445 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:14,445 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 
2023-07-16 23:15:14,445 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 21a5d276c179666f4f8c0e9585fe9212: 2023-07-16 23:15:14,448 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:14,450 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549314449"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549314449"}]},"ts":"1689549314449"} 2023-07-16 23:15:14,451 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:14,453 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:14,453 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549314453"}]},"ts":"1689549314453"} 2023-07-16 23:15:14,454 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-16 23:15:14,459 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:14,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:14,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:14,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:14,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:14,460 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, ASSIGN}] 2023-07-16 23:15:14,462 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, ASSIGN 2023-07-16 23:15:14,464 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:14,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 23:15:14,614 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:14,616 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:14,616 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549314616"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549314616"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549314616"}]},"ts":"1689549314616"} 2023-07-16 23:15:14,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:14,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 23:15:14,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:14,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21a5d276c179666f4f8c0e9585fe9212, NAME => 'GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:14,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:14,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,777 INFO [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,778 DEBUG [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/f 2023-07-16 23:15:14,778 DEBUG [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/f 2023-07-16 23:15:14,779 INFO [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21a5d276c179666f4f8c0e9585fe9212 columnFamilyName f 2023-07-16 23:15:14,780 INFO [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] regionserver.HStore(310): Store=21a5d276c179666f4f8c0e9585fe9212/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:14,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,781 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:14,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:14,788 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21a5d276c179666f4f8c0e9585fe9212; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9432862880, jitterRate=-0.12149618566036224}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:14,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21a5d276c179666f4f8c0e9585fe9212: 2023-07-16 23:15:14,789 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212., pid=99, masterSystemTime=1689549314770 2023-07-16 23:15:14,791 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:14,791 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 
2023-07-16 23:15:14,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:14,792 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549314791"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549314791"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549314791"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549314791"}]},"ts":"1689549314791"} 2023-07-16 23:15:14,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-16 23:15:14,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,41683,1689549296507 in 176 msec 2023-07-16 23:15:14,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-16 23:15:14,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, ASSIGN in 336 msec 2023-07-16 23:15:14,799 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:14,799 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549314799"}]},"ts":"1689549314799"} 2023-07-16 23:15:14,801 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-16 23:15:14,803 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:14,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 705 msec 2023-07-16 23:15:14,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 23:15:14,999 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-16 23:15:15,000 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-16 23:15:15,000 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:15,010 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-16 23:15:15,011 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:15,011 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-16 23:15:15,012 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:15,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 23:15:15,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:15,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 23:15:15,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:15,033 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:15,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:15,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:15,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region 21a5d276c179666f4f8c0e9585fe9212 to RSGroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, REOPEN/MOVE 2023-07-16 23:15:15,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,044 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region ccd5b343284bb60b014fa4360b1ec243 to RSGroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:15,048 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, REOPEN/MOVE 2023-07-16 23:15:15,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, REOPEN/MOVE 2023-07-16 23:15:15,050 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:15,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1188213360, current retry=0 2023-07-16 23:15:15,052 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, REOPEN/MOVE 2023-07-16 23:15:15,052 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315050"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549315050"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549315050"}]},"ts":"1689549315050"} 2023-07-16 23:15:15,053 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:15,053 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315053"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549315053"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549315053"}]},"ts":"1689549315053"} 2023-07-16 23:15:15,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:15,061 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:15,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21a5d276c179666f4f8c0e9585fe9212, disabling compactions & flushes 2023-07-16 23:15:15,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:15,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:15,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. after waiting 0 ms 2023-07-16 23:15:15,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:15,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:15,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:15,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21a5d276c179666f4f8c0e9585fe9212: 2023-07-16 23:15:15,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 21a5d276c179666f4f8c0e9585fe9212 move to jenkins-hbase4.apache.org,33913,1689549296335 record at close sequenceid=2 2023-07-16 23:15:15,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ccd5b343284bb60b014fa4360b1ec243, disabling compactions & flushes 2023-07-16 23:15:15,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:15,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:15,224 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=CLOSED 2023-07-16 23:15:15,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. after waiting 0 ms 2023-07-16 23:15:15,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 
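[editor's note] The UnassignRegionHandler/Close activity above is the first half of the REOPEN/MOVE procedures (pids 100/101) kicked off by the MoveTables request logged at 23:15:15,036. A minimal client-side sketch of that request, assuming `conn` is an open Connection to this mini-cluster, the target group already exists with at least one server in it (set up earlier in the test), and using the RSGroupAdminClient from the hbase-rsgroup module this log comes from; the class and method names are illustrative.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  static void moveTablesToGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>(Arrays.asList(
        TableName.valueOf("GrouptestMultiTableMoveA"),
        TableName.valueOf("GrouptestMultiTableMoveB")));
    // Rewrites the table->group mapping in the /hbase/rsgroup znodes and then
    // reopens each region on a server of the target group (pids 100-105 above).
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1188213360");
  }
}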
2023-07-16 23:15:15,224 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549315224"}]},"ts":"1689549315224"} 2023-07-16 23:15:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:15,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:15,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ccd5b343284bb60b014fa4360b1ec243: 2023-07-16 23:15:15,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ccd5b343284bb60b014fa4360b1ec243 move to jenkins-hbase4.apache.org,33913,1689549296335 record at close sequenceid=2 2023-07-16 23:15:15,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-16 23:15:15,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,41683,1689549296507 in 171 msec 2023-07-16 23:15:15,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,236 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:15,236 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=CLOSED 2023-07-16 23:15:15,236 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315236"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549315236"}]},"ts":"1689549315236"} 2023-07-16 23:15:15,249 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-16 23:15:15,249 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,41683,1689549296507 in 181 msec 2023-07-16 23:15:15,250 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33913,1689549296335; forceNewPlan=false, retain=false 2023-07-16 23:15:15,321 WARN [HBase-Metrics2-1] 
impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 23:15:15,387 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:15,387 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:15,387 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315386"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549315386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549315386"}]},"ts":"1689549315386"} 2023-07-16 23:15:15,387 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315386"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549315386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549315386"}]},"ts":"1689549315386"} 2023-07-16 23:15:15,389 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=101, state=RUNNABLE; OpenRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:15,393 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=100, state=RUNNABLE; OpenRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:15,543 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 
2023-07-16 23:15:15,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ccd5b343284bb60b014fa4360b1ec243, NAME => 'GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:15,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:15,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,545 INFO [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,546 DEBUG [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/f 2023-07-16 23:15:15,546 DEBUG [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/f 2023-07-16 23:15:15,547 INFO [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ccd5b343284bb60b014fa4360b1ec243 columnFamilyName f 2023-07-16 23:15:15,547 INFO [StoreOpener-ccd5b343284bb60b014fa4360b1ec243-1] regionserver.HStore(310): Store=ccd5b343284bb60b014fa4360b1ec243/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:15,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:15,553 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ccd5b343284bb60b014fa4360b1ec243; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10875223520, jitterRate=0.012834116816520691}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:15,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ccd5b343284bb60b014fa4360b1ec243: 2023-07-16 23:15:15,553 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243., pid=104, masterSystemTime=1689549315540 2023-07-16 23:15:15,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:15,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:15,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 
2023-07-16 23:15:15,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21a5d276c179666f4f8c0e9585fe9212, NAME => 'GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:15,555 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:15,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,555 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315555"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549315555"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549315555"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549315555"}]},"ts":"1689549315555"} 2023-07-16 23:15:15,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:15,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=101 2023-07-16 23:15:15,559 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=101, state=SUCCESS; OpenRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,33913,1689549296335 in 168 msec 2023-07-16 23:15:15,560 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, REOPEN/MOVE in 510 msec 2023-07-16 23:15:15,562 INFO [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,563 DEBUG [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/f 2023-07-16 23:15:15,563 DEBUG [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/f 2023-07-16 23:15:15,563 INFO [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21a5d276c179666f4f8c0e9585fe9212 columnFamilyName f 2023-07-16 23:15:15,564 INFO [StoreOpener-21a5d276c179666f4f8c0e9585fe9212-1] regionserver.HStore(310): Store=21a5d276c179666f4f8c0e9585fe9212/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:15,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,569 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:15,570 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21a5d276c179666f4f8c0e9585fe9212; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11038944640, jitterRate=0.028081834316253662}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:15,570 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21a5d276c179666f4f8c0e9585fe9212: 2023-07-16 23:15:15,571 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212., pid=105, masterSystemTime=1689549315540 2023-07-16 23:15:15,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:15,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 
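[editor's note] At this point both regions have been reopened on jenkins-hbase4.apache.org,33913 and the move is effectively complete. A hedged sketch of the kind of check that follows (the GetRSGroupInfoOfTable calls logged at 23:15:16,058): confirm the table now maps to the new group and that the region's hosting server is a member of it. `conn` and `rsGroupAdmin` are assumed as in the earlier sketch; the class name is illustrative.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyMoveSketch {
  static void verifyMoved(Connection conn, RSGroupAdminClient rsGroupAdmin) throws Exception {
    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(tableB);
    assert "Group_testMultiTableMove_1188213360".equals(info.getName());
    try (RegionLocator locator = conn.getRegionLocator(tableB)) {
      // Force a fresh meta lookup so we see the post-move location (openSeqNum=5 above).
      ServerName host = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true).getServerName();
      assert info.getServers().contains(host.getAddress());
    }
  }
}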
2023-07-16 23:15:15,573 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:15,573 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549315572"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549315572"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549315572"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549315572"}]},"ts":"1689549315572"} 2023-07-16 23:15:15,576 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=100 2023-07-16 23:15:15,576 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=100, state=SUCCESS; OpenRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,33913,1689549296335 in 181 msec 2023-07-16 23:15:15,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, REOPEN/MOVE in 533 msec 2023-07-16 23:15:16,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-16 23:15:16,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1188213360. 2023-07-16 23:15:16,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:16,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:16,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:16,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:16,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:16,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:16,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:16,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1188213360 2023-07-16 23:15:16,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:16,062 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-16 23:15:16,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-16 23:15:16,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 23:15:16,066 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549316066"}]},"ts":"1689549316066"} 2023-07-16 23:15:16,068 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-16 23:15:16,069 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-16 23:15:16,072 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, UNASSIGN}] 2023-07-16 23:15:16,074 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, UNASSIGN 2023-07-16 23:15:16,074 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:16,074 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549316074"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549316074"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549316074"}]},"ts":"1689549316074"} 2023-07-16 23:15:16,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure ccd5b343284bb60b014fa4360b1ec243, 
server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:16,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 23:15:16,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:16,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ccd5b343284bb60b014fa4360b1ec243, disabling compactions & flushes 2023-07-16 23:15:16,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:16,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:16,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. after waiting 0 ms 2023-07-16 23:15:16,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 2023-07-16 23:15:16,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:16,233 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243. 
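[editor's note] The close above is driven by DisableTableProcedure pid=106, which the listener started with a plain Admin.disableTable call (the "Started disable of GrouptestMultiTableMoveA" line). A short sketch, assuming `conn` as before; the call blocks, polling "is procedure done" exactly as the MasterRpcServices(1230) lines show.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DisableTableSketch {
  static void disableTableA(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      // Returns once the table is DISABLED in hbase:meta and its region is unassigned.
      admin.disableTable(TableName.valueOf("GrouptestMultiTableMoveA"));
    }
  }
}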
2023-07-16 23:15:16,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ccd5b343284bb60b014fa4360b1ec243: 2023-07-16 23:15:16,234 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:16,235 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=ccd5b343284bb60b014fa4360b1ec243, regionState=CLOSED 2023-07-16 23:15:16,235 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549316235"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549316235"}]},"ts":"1689549316235"} 2023-07-16 23:15:16,237 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-16 23:15:16,237 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure ccd5b343284bb60b014fa4360b1ec243, server=jenkins-hbase4.apache.org,33913,1689549296335 in 161 msec 2023-07-16 23:15:16,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-16 23:15:16,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ccd5b343284bb60b014fa4360b1ec243, UNASSIGN in 167 msec 2023-07-16 23:15:16,240 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549316240"}]},"ts":"1689549316240"} 2023-07-16 23:15:16,241 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-16 23:15:16,243 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-16 23:15:16,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 182 msec 2023-07-16 23:15:16,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 23:15:16,369 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-16 23:15:16,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-16 23:15:16,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,372 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1188213360' 2023-07-16 23:15:16,374 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:16,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:16,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:16,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:16,378 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:16,380 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/recovered.edits] 2023-07-16 23:15:16,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 23:15:16,385 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243/recovered.edits/7.seqid 2023-07-16 23:15:16,386 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveA/ccd5b343284bb60b014fa4360b1ec243 2023-07-16 23:15:16,386 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 23:15:16,389 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,391 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-16 23:15:16,392 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-16 23:15:16,393 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,393 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
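[editor's note] Deletion follows the same pattern: the client issues Admin.deleteTable, and DeleteTableProcedure pid=109 archives the region directory (the HFileArchiver lines above move it under .../archive/data/default/... rather than deleting it outright) before clearing hbase:meta. A minimal sketch, again assuming `conn`; the class name is illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DeleteTableSketch {
  static void deleteTableA(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      // The table must already be disabled; its region data ends up under the archive dir.
      admin.deleteTable(TableName.valueOf("GrouptestMultiTableMoveA"));
    }
  }
}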
2023-07-16 23:15:16,394 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549316394"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:16,395 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 23:15:16,395 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ccd5b343284bb60b014fa4360b1ec243, NAME => 'GrouptestMultiTableMoveA,,1689549313480.ccd5b343284bb60b014fa4360b1ec243.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 23:15:16,395 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-16 23:15:16,396 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549316395"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:16,397 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-16 23:15:16,399 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 23:15:16,400 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 29 msec 2023-07-16 23:15:16,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 23:15:16,483 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-16 23:15:16,483 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-16 23:15:16,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-16 23:15:16,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 23:15:16,488 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549316488"}]},"ts":"1689549316488"} 2023-07-16 23:15:16,489 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-16 23:15:16,490 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-16 23:15:16,491 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-16 23:15:16,492 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, UNASSIGN}] 
2023-07-16 23:15:16,494 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, UNASSIGN 2023-07-16 23:15:16,495 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:16,495 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549316495"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549316495"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549316495"}]},"ts":"1689549316495"} 2023-07-16 23:15:16,498 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,33913,1689549296335}] 2023-07-16 23:15:16,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 23:15:16,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:16,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21a5d276c179666f4f8c0e9585fe9212, disabling compactions & flushes 2023-07-16 23:15:16,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:16,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:16,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. after waiting 0 ms 2023-07-16 23:15:16,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 2023-07-16 23:15:16,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:16,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212. 
2023-07-16 23:15:16,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21a5d276c179666f4f8c0e9585fe9212: 2023-07-16 23:15:16,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:16,657 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=21a5d276c179666f4f8c0e9585fe9212, regionState=CLOSED 2023-07-16 23:15:16,657 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689549316657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549316657"}]},"ts":"1689549316657"} 2023-07-16 23:15:16,660 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-16 23:15:16,660 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 21a5d276c179666f4f8c0e9585fe9212, server=jenkins-hbase4.apache.org,33913,1689549296335 in 161 msec 2023-07-16 23:15:16,662 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-16 23:15:16,662 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=21a5d276c179666f4f8c0e9585fe9212, UNASSIGN in 168 msec 2023-07-16 23:15:16,663 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549316662"}]},"ts":"1689549316662"} 2023-07-16 23:15:16,664 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-16 23:15:16,665 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-16 23:15:16,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 182 msec 2023-07-16 23:15:16,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 23:15:16,791 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-16 23:15:16,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-16 23:15:16,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,795 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1188213360' 2023-07-16 23:15:16,796 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:16,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:16,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:16,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:16,802 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:16,805 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/recovered.edits] 2023-07-16 23:15:16,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 23:15:16,813 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/recovered.edits/7.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212/recovered.edits/7.seqid 2023-07-16 23:15:16,814 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/GrouptestMultiTableMoveB/21a5d276c179666f4f8c0e9585fe9212 2023-07-16 23:15:16,814 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 23:15:16,818 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,822 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-16 23:15:16,824 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-16 23:15:16,825 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,825 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-16 23:15:16,825 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549316825"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:16,827 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 23:15:16,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 21a5d276c179666f4f8c0e9585fe9212, NAME => 'GrouptestMultiTableMoveB,,1689549314098.21a5d276c179666f4f8c0e9585fe9212.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 23:15:16,827 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-16 23:15:16,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549316827"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:16,829 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-16 23:15:16,831 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 23:15:16,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 39 msec 2023-07-16 23:15:16,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 23:15:16,913 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-16 23:15:16,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:16,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:16,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:16,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:16,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:16,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913] to rsgroup default 2023-07-16 23:15:16,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:16,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1188213360 2023-07-16 23:15:16,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:16,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:16,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1188213360, current retry=0 2023-07-16 23:15:16,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335] are moved back to Group_testMultiTableMove_1188213360 2023-07-16 23:15:16,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1188213360 => default 2023-07-16 23:15:16,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:16,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1188213360 2023-07-16 23:15:16,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:16,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:16,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:16,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:16,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:16,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
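[editor's note] The remaining lines are the standard per-test teardown: the emptied group's server is moved back to default and the group removed (a group has to hold no tables and no servers before RemoveRSGroup succeeds), the bogus "master" group is recreated, and the attempt to move the master's own address into it fails with the ConstraintException shown below, which the test logs as "Got this on setup, FYI" and ignores. A hedged sketch of the first two calls, assuming `rsGroupAdmin` as in the earlier sketches; the hostname/port literals just mirror this run's log.

import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class GroupTeardownSketch {
  static void tearDownGroup(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Both tables were deleted above, so only the one server is left in the group.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33913)), "default");
    // Succeeds only once the group holds no servers and no tables.
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1188213360");
  }
}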
2023-07-16 23:15:16,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:16,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:16,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:16,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:16,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:16,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:16,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:16,948 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:16,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:16,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:16,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:16,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:16,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:17,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 512 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550517007, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:17,008 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:17,010 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:17,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,011 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:17,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,031 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512 (was 509) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-22 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_290175226_17 at /127.0.0.1:50198 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x29a77039-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-184735455_17 at /127.0.0.1:43706 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-184735455_17 at /127.0.0.1:43200 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x29a77039-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=774 (was 808), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=446 (was 424) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=2801 (was 2994) 2023-07-16 23:15:17,031 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 23:15:17,048 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=446, ProcessCount=176, AvailableMemoryMB=2800 2023-07-16 23:15:17,048 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 23:15:17,048 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-16 23:15:17,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:17,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:17,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:17,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:17,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:17,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:17,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:17,062 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:17,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:17,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:17,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:17,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 540 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550517072, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:17,072 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:17,074 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:17,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,075 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:17,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-16 23:15:17,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup oldGroup 2023-07-16 23:15:17,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 23:15:17,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to default 2023-07-16 23:15:17,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-16 23:15:17,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 23:15:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 23:15:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-16 23:15:17,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 23:15:17,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:17,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683] to rsgroup anotherRSGroup 2023-07-16 23:15:17,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 23:15:17,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-16 23:15:17,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:17,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 23:15:17,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41683,1689549296507] are moved back to default 2023-07-16 23:15:17,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-16 23:15:17,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 23:15:17,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 23:15:17,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-16 23:15:17,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:42846 deadline: 1689550517126, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-16 23:15:17,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-16 23:15:17,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:42846 deadline: 1689550517128, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-16 23:15:17,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-16 23:15:17,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:42846 deadline: 1689550517129, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-16 23:15:17,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-16 23:15:17,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 580 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:42846 deadline: 1689550517129, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-16 23:15:17,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:17,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
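The three failed renames above each trip a different precondition in RSGroupInfoManagerImpl.renameRSGroup: the source group must exist, the target name must not already be in use, and the built-in default group can never be renamed. A minimal client-side sketch of hitting those same checks, assuming the RSGroupAdminClient seen in the stack traces exposes renameRSGroup(oldName, newName); the constructor and connection boilerplate are illustrative, not taken from the log:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupConstraintsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1");
      } catch (ConstraintException e) {
        // Rejected by the master: "RSGroup nonExistingRSGroup does not exist"
      }
      try {
        rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup");
      } catch (ConstraintException e) {
        // Rejected by the master: "Group already exists: anotherRSGroup"
      }
      try {
        rsGroupAdmin.renameRSGroup("default", "newRSGroup2");
      } catch (ConstraintException e) {
        // Rejected by the master: "Can't rename default rsgroup"
      }
    }
  }
}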
2023-07-16 23:15:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683] to rsgroup default 2023-07-16 23:15:17,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 23:15:17,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:17,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-16 23:15:17,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41683,1689549296507] are moved back to anotherRSGroup 2023-07-16 23:15:17,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-16 23:15:17,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-16 23:15:17,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 23:15:17,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:17,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:17,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-16 23:15:17,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:17,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup default 2023-07-16 23:15:17,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 23:15:17,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-16 23:15:17,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to oldGroup 2023-07-16 23:15:17,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-16 23:15:17,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-16 23:15:17,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:17,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:17,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:17,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
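The teardown entries above follow a fixed pattern before a group is dropped: its servers are first moved back to the built-in default group (the "Move servers done: anotherRSGroup => default" and "oldGroup => default" lines), and only then is the group itself removed. A rough equivalent with the client API named in the stack traces, assuming RSGroupAdminClient's moveServers(Set<Address>, String) and removeRSGroup(String) signatures; the host:port value is copied from the log and the connection setup is illustrative:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class EmptyThenRemoveGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the group's only server back to "default", mirroring the log entries above.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:41683")),
          "default");
      // With no servers or tables left in it, the group can then be removed.
      rsGroupAdmin.removeRSGroup("anotherRSGroup");
    }
  }
}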
2023-07-16 23:15:17,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:17,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:17,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:17,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:17,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:17,167 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:17,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:17,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:17,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:17,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 616 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550517177, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:17,178 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:17,179 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:17,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,180 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:17,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,198 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=515 (was 512) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=774 (was 774), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=446 (was 446), ProcessCount=176 (was 176), AvailableMemoryMB=2807 (was 2800) - AvailableMemoryMB LEAK? - 2023-07-16 23:15:17,198 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-16 23:15:17,216 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=515, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=446, ProcessCount=176, AvailableMemoryMB=2806 2023-07-16 23:15:17,216 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-16 23:15:17,216 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-16 23:15:17,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:17,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:17,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:17,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:17,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:17,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:17,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:17,231 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:17,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:17,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:17,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:17,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:17,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 644 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550517240, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:17,241 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:17,242 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:17,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,243 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:17,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:17,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-16 23:15:17,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:17,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:17,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup oldgroup 2023-07-16 23:15:17,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:17,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 23:15:17,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to default 2023-07-16 23:15:17,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-16 23:15:17,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:17,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:17,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:17,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 23:15:17,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:17,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:17,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-16 23:15:17,274 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:17,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-16 23:15:17,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 23:15:17,276 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:17,276 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,277 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,277 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,280 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:17,281 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,282 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 empty. 
2023-07-16 23:15:17,282 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,282 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-16 23:15:17,296 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:17,298 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => f8c9eb4dc8325188c8ee7648ac1d3697, NAME => 'testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:17,318 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:17,318 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing f8c9eb4dc8325188c8ee7648ac1d3697, disabling compactions & flushes 2023-07-16 23:15:17,318 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:17,319 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:17,319 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. after waiting 0 ms 2023-07-16 23:15:17,319 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:17,319 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:17,319 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:17,321 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:17,322 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549317322"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549317322"}]},"ts":"1689549317322"} 2023-07-16 23:15:17,324 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
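The HMaster entry above logs the full descriptor for 'testRename' (a single family 'tr' with REGION_REPLICATION => '1') and then drives CreateTableProcedure through its PRE_OPERATION, WRITE_FS_LAYOUT and ADD_TO_META states. For reference, a minimal client request that produces an equivalent descriptor with the standard HBase 2.x Admin API; only the table name, family name and region replication are taken from the log, and the remaining family attributes are left at client defaults, which need not match every value printed above:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Table "testRename" with REGION_REPLICATION = 1 and one column family "tr",
      // matching the descriptor printed by HMaster in the log above.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
    }
  }
}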
2023-07-16 23:15:17,324 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:17,325 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549317325"}]},"ts":"1689549317325"} 2023-07-16 23:15:17,326 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-16 23:15:17,329 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:17,329 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:17,329 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:17,329 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:17,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, ASSIGN}] 2023-07-16 23:15:17,331 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, ASSIGN 2023-07-16 23:15:17,332 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:17,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 23:15:17,482 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:17,484 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:17,484 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549317483"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549317483"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549317483"}]},"ts":"1689549317483"} 2023-07-16 23:15:17,485 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:17,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 23:15:17,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:17,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f8c9eb4dc8325188c8ee7648ac1d3697, NAME => 'testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:17,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:17,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,643 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,644 DEBUG [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/tr 2023-07-16 23:15:17,644 DEBUG [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/tr 2023-07-16 23:15:17,645 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f8c9eb4dc8325188c8ee7648ac1d3697 columnFamilyName tr 2023-07-16 23:15:17,645 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] regionserver.HStore(310): Store=f8c9eb4dc8325188c8ee7648ac1d3697/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:17,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:17,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:17,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f8c9eb4dc8325188c8ee7648ac1d3697; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10284362720, jitterRate=-0.04219408333301544}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:17,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:17,653 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697., pid=116, masterSystemTime=1689549317637 2023-07-16 23:15:17,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:17,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 
2023-07-16 23:15:17,655 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:17,655 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549317654"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549317654"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549317654"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549317654"}]},"ts":"1689549317654"} 2023-07-16 23:15:17,658 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-16 23:15:17,658 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,43561,1689549300217 in 171 msec 2023-07-16 23:15:17,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-16 23:15:17,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, ASSIGN in 329 msec 2023-07-16 23:15:17,660 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:17,661 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549317661"}]},"ts":"1689549317661"} 2023-07-16 23:15:17,662 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-16 23:15:17,664 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:17,666 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 392 msec 2023-07-16 23:15:17,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 23:15:17,878 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-16 23:15:17,879 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-16 23:15:17,879 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:17,883 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-16 23:15:17,883 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:17,883 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
2023-07-16 23:15:17,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-16 23:15:17,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:17,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:17,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:17,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:17,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-16 23:15:17,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region f8c9eb4dc8325188c8ee7648ac1d3697 to RSGroup oldgroup 2023-07-16 23:15:17,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE 2023-07-16 23:15:17,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-16 23:15:17,893 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE 2023-07-16 23:15:17,893 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:17,893 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549317893"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549317893"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549317893"}]},"ts":"1689549317893"} 2023-07-16 23:15:17,895 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:18,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f8c9eb4dc8325188c8ee7648ac1d3697, disabling compactions & flushes 2023-07-16 23:15:18,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:18,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:18,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. after waiting 0 ms 2023-07-16 23:15:18,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:18,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:18,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:18,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:18,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f8c9eb4dc8325188c8ee7648ac1d3697 move to jenkins-hbase4.apache.org,38989,1689549296125 record at close sequenceid=2 2023-07-16 23:15:18,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,056 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=CLOSED 2023-07-16 23:15:18,056 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549318056"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549318056"}]},"ts":"1689549318056"} 2023-07-16 23:15:18,059 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-16 23:15:18,059 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,43561,1689549300217 in 162 msec 2023-07-16 23:15:18,060 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38989,1689549296125; 
forceNewPlan=false, retain=false 2023-07-16 23:15:18,210 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 23:15:18,210 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:18,210 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549318210"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549318210"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549318210"}]},"ts":"1689549318210"} 2023-07-16 23:15:18,212 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:18,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:18,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f8c9eb4dc8325188c8ee7648ac1d3697, NAME => 'testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:18,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:18,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,371 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,372 DEBUG [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/tr 2023-07-16 23:15:18,372 DEBUG [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/tr 2023-07-16 23:15:18,372 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f8c9eb4dc8325188c8ee7648ac1d3697 columnFamilyName tr 2023-07-16 23:15:18,373 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] regionserver.HStore(310): Store=f8c9eb4dc8325188c8ee7648ac1d3697/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:18,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:18,380 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f8c9eb4dc8325188c8ee7648ac1d3697; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10576819360, jitterRate=-0.014956936240196228}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:18,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:18,381 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697., pid=119, masterSystemTime=1689549318364 2023-07-16 23:15:18,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:18,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 
2023-07-16 23:15:18,383 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:18,383 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549318383"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549318383"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549318383"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549318383"}]},"ts":"1689549318383"} 2023-07-16 23:15:18,386 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-16 23:15:18,386 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,38989,1689549296125 in 172 msec 2023-07-16 23:15:18,387 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE in 494 msec 2023-07-16 23:15:18,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-16 23:15:18,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-16 23:15:18,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:18,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:18,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:18,900 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:18,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 23:15:18,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:18,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 23:15:18,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:18,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 23:15:18,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:18,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:18,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:18,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-16 23:15:18,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:18,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:18,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:18,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:18,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:18,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:18,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:18,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:18,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683] to rsgroup normal 2023-07-16 23:15:18,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:18,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:18,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:18,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:18,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:18,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 23:15:18,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41683,1689549296507] are moved back to default 2023-07-16 23:15:18,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-16 23:15:18,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:18,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:18,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:18,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 23:15:18,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:18,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:18,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-16 23:15:18,937 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:18,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-16 23:15:18,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 23:15:18,939 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:18,939 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:18,940 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:18,940 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 23:15:18,942 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:18,944 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:18,945 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:18,946 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 empty. 2023-07-16 23:15:18,947 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:18,947 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-16 23:15:18,965 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:18,966 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => dee4450ec086e99bcaec16c3a6848eb5, NAME => 'unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:18,982 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:18,982 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing dee4450ec086e99bcaec16c3a6848eb5, disabling compactions & flushes 2023-07-16 23:15:18,982 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:18,982 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:18,982 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. after waiting 0 ms 2023-07-16 23:15:18,982 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:18,982 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 
2023-07-16 23:15:18,982 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:18,986 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:18,987 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549318987"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549318987"}]},"ts":"1689549318987"} 2023-07-16 23:15:18,989 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:18,990 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:18,990 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549318990"}]},"ts":"1689549318990"} 2023-07-16 23:15:18,992 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-16 23:15:18,995 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, ASSIGN}] 2023-07-16 23:15:18,997 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, ASSIGN 2023-07-16 23:15:18,998 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:19,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 23:15:19,149 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:19,150 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549319149"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549319149"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549319149"}]},"ts":"1689549319149"} 2023-07-16 23:15:19,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:19,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-16 23:15:19,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:19,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dee4450ec086e99bcaec16c3a6848eb5, NAME => 'unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:19,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:19,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,309 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,310 DEBUG [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/ut 2023-07-16 23:15:19,310 DEBUG [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/ut 2023-07-16 23:15:19,311 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dee4450ec086e99bcaec16c3a6848eb5 columnFamilyName ut 2023-07-16 23:15:19,311 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] regionserver.HStore(310): Store=dee4450ec086e99bcaec16c3a6848eb5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:19,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:19,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dee4450ec086e99bcaec16c3a6848eb5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11786191520, jitterRate=0.09767462313175201}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:19,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:19,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5., pid=122, masterSystemTime=1689549319303 2023-07-16 23:15:19,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:19,324 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 
2023-07-16 23:15:19,324 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:19,324 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549319324"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549319324"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549319324"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549319324"}]},"ts":"1689549319324"} 2023-07-16 23:15:19,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-16 23:15:19,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,43561,1689549300217 in 175 msec 2023-07-16 23:15:19,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-16 23:15:19,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, ASSIGN in 332 msec 2023-07-16 23:15:19,329 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:19,330 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549319330"}]},"ts":"1689549319330"} 2023-07-16 23:15:19,331 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-16 23:15:19,334 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:19,335 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 400 msec 2023-07-16 23:15:19,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 23:15:19,541 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-16 23:15:19,541 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-16 23:15:19,541 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:19,545 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-16 23:15:19,545 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:19,545 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-16 23:15:19,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-16 23:15:19,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 23:15:19,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:19,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:19,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:19,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:19,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-16 23:15:19,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region dee4450ec086e99bcaec16c3a6848eb5 to RSGroup normal 2023-07-16 23:15:19,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE 2023-07-16 23:15:19,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-16 23:15:19,556 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE 2023-07-16 23:15:19,557 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:19,557 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549319557"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549319557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549319557"}]},"ts":"1689549319557"} 2023-07-16 23:15:19,559 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:19,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dee4450ec086e99bcaec16c3a6848eb5, disabling compactions & flushes 2023-07-16 23:15:19,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 
2023-07-16 23:15:19,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:19,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. after waiting 0 ms 2023-07-16 23:15:19,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:19,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:19,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:19,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:19,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dee4450ec086e99bcaec16c3a6848eb5 move to jenkins-hbase4.apache.org,41683,1689549296507 record at close sequenceid=2 2023-07-16 23:15:19,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:19,723 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=CLOSED 2023-07-16 23:15:19,723 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549319723"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549319723"}]},"ts":"1689549319723"} 2023-07-16 23:15:19,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-16 23:15:19,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,43561,1689549300217 in 165 msec 2023-07-16 23:15:19,726 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:19,877 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:19,877 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549319877"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549319877"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549319877"}]},"ts":"1689549319877"} 2023-07-16 23:15:19,879 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:20,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:20,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dee4450ec086e99bcaec16c3a6848eb5, NAME => 'unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:20,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:20,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,037 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,038 DEBUG [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/ut 2023-07-16 23:15:20,038 DEBUG [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/ut 2023-07-16 23:15:20,038 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
dee4450ec086e99bcaec16c3a6848eb5 columnFamilyName ut 2023-07-16 23:15:20,039 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] regionserver.HStore(310): Store=dee4450ec086e99bcaec16c3a6848eb5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:20,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,044 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dee4450ec086e99bcaec16c3a6848eb5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10270401280, jitterRate=-0.043494343757629395}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:20,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:20,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5., pid=125, masterSystemTime=1689549320031 2023-07-16 23:15:20,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:20,046 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 
2023-07-16 23:15:20,047 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:20,047 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549320047"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549320047"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549320047"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549320047"}]},"ts":"1689549320047"} 2023-07-16 23:15:20,049 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-16 23:15:20,049 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,41683,1689549296507 in 169 msec 2023-07-16 23:15:20,050 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE in 493 msec 2023-07-16 23:15:20,277 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 23:15:20,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-16 23:15:20,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
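The entries above trace a MoveTables request end to end: the rsgroup znodes are rewritten, then TransitRegionStateProcedure pid=123 closes region dee4450ec086e99bcaec16c3a6848eb5 on server 43561 and reopens it on 41683 so that unmovedTable lands in group normal. A minimal client-side sketch of issuing the same request follows; it assumes the branch-2.4 RSGroupAdminClient that appears in the stack traces later in this log, with the table and group names copied from the entries above.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Client wrapper around the RSGroupAdminService coprocessor endpoint
      // that handled the MoveTables call logged above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Ask the master to move the table into group "normal"; the master then
      // reopens each of the table's regions on a server of that group
      // (the REOPEN/MOVE procedure shown in the log).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
    }
  }
}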
2023-07-16 23:15:20,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:20,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:20,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:20,568 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:20,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 23:15:20,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:20,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 23:15:20,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:20,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 23:15:20,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:20,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-16 23:15:20,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:20,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:20,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:20,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:20,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-16 23:15:20,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-16 23:15:20,584 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:20,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:20,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-16 23:15:20,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:20,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 23:15:20,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:20,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 23:15:20,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:20,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:20,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:20,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-16 23:15:20,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:20,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:20,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:20,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:20,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:20,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-16 23:15:20,604 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region dee4450ec086e99bcaec16c3a6848eb5 to RSGroup default 2023-07-16 23:15:20,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE 2023-07-16 23:15:20,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 23:15:20,605 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE 2023-07-16 23:15:20,605 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:20,606 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549320605"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549320605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549320605"}]},"ts":"1689549320605"} 2023-07-16 23:15:20,607 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:20,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dee4450ec086e99bcaec16c3a6848eb5, disabling compactions & flushes 2023-07-16 23:15:20,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:20,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:20,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. after waiting 0 ms 2023-07-16 23:15:20,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:20,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:20,768 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 
2023-07-16 23:15:20,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:20,769 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dee4450ec086e99bcaec16c3a6848eb5 move to jenkins-hbase4.apache.org,43561,1689549300217 record at close sequenceid=5 2023-07-16 23:15:20,772 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=CLOSED 2023-07-16 23:15:20,772 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549320772"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549320772"}]},"ts":"1689549320772"} 2023-07-16 23:15:20,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:20,777 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-16 23:15:20,777 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,41683,1689549296507 in 167 msec 2023-07-16 23:15:20,778 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:20,928 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:20,928 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549320928"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549320928"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549320928"}]},"ts":"1689549320928"} 2023-07-16 23:15:20,930 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:21,087 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 
2023-07-16 23:15:21,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dee4450ec086e99bcaec16c3a6848eb5, NAME => 'unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:21,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:21,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,088 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,089 DEBUG [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/ut 2023-07-16 23:15:21,089 DEBUG [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/ut 2023-07-16 23:15:21,090 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dee4450ec086e99bcaec16c3a6848eb5 columnFamilyName ut 2023-07-16 23:15:21,090 INFO [StoreOpener-dee4450ec086e99bcaec16c3a6848eb5-1] regionserver.HStore(310): Store=dee4450ec086e99bcaec16c3a6848eb5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:21,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:21,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dee4450ec086e99bcaec16c3a6848eb5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10190369120, jitterRate=-0.050947919487953186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:21,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:21,096 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5., pid=128, masterSystemTime=1689549321083 2023-07-16 23:15:21,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:21,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:21,097 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dee4450ec086e99bcaec16c3a6848eb5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:21,098 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689549321097"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549321097"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549321097"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549321097"}]},"ts":"1689549321097"} 2023-07-16 23:15:21,100 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-16 23:15:21,100 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure dee4450ec086e99bcaec16c3a6848eb5, server=jenkins-hbase4.apache.org,43561,1689549300217 in 169 msec 2023-07-16 23:15:21,101 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dee4450ec086e99bcaec16c3a6848eb5, REOPEN/MOVE in 496 msec 2023-07-16 23:15:21,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-16 23:15:21,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
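Between the two table moves, the RenameRSGroup entries at 23:15:20,572 through 23:15:20,580 show group oldgroup being renamed to newgroup and the group znodes rewritten. A client-side sketch of that call is below; it assumes the branch-2.4 RSGroupAdminClient exposes a renameRSGroup method matching the RenameRSGroup RPC logged above, which is an assumption rather than something this log confirms.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameGroupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Rename the group; the servers and tables that belonged to "oldgroup"
      // keep their assignments and simply show up under "newgroup" afterwards,
      // as the subsequent GetRSGroupInfo/GetRSGroupInfoOfTable calls verify.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
    }
  }
}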
2023-07-16 23:15:21,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:21,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41683] to rsgroup default 2023-07-16 23:15:21,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 23:15:21,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:21,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:21,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:21,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:21,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-16 23:15:21,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41683,1689549296507] are moved back to normal 2023-07-16 23:15:21,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-16 23:15:21,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:21,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-16 23:15:21,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:21,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:21,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:21,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 23:15:21,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:21,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:21,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
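The cleanup above drains server jenkins-hbase4.apache.org:41683 out of group normal back into default and then drops the now-empty group; a group can only be removed once it owns no servers and no tables. A sketch of the same two calls, with the server address and group names copied from the log and the client signatures assumed from the stack traces further down:

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the group's only server back to "default" (the MoveServers call above) ...
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41683)),
          "default");
      // ... then remove the empty group (the RemoveRSGroup call above).
      rsGroupAdmin.removeRSGroup("normal");
    }
  }
}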
2023-07-16 23:15:21,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:21,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:21,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:21,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:21,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:21,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:21,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:21,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:21,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-16 23:15:21,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:21,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:21,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:21,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-16 23:15:21,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(345): Moving region f8c9eb4dc8325188c8ee7648ac1d3697 to RSGroup default 2023-07-16 23:15:21,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE 2023-07-16 23:15:21,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 23:15:21,635 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE 2023-07-16 23:15:21,636 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:21,636 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549321636"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549321636"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549321636"}]},"ts":"1689549321636"} 2023-07-16 23:15:21,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,38989,1689549296125}] 2023-07-16 23:15:21,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:21,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f8c9eb4dc8325188c8ee7648ac1d3697, disabling compactions & flushes 2023-07-16 23:15:21,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:21,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:21,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. after waiting 0 ms 2023-07-16 23:15:21,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:21,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 23:15:21,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 
2023-07-16 23:15:21,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:21,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f8c9eb4dc8325188c8ee7648ac1d3697 move to jenkins-hbase4.apache.org,41683,1689549296507 record at close sequenceid=5 2023-07-16 23:15:21,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:21,800 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=CLOSED 2023-07-16 23:15:21,800 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549321800"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549321800"}]},"ts":"1689549321800"} 2023-07-16 23:15:21,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-16 23:15:21,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,38989,1689549296125 in 164 msec 2023-07-16 23:15:21,803 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:21,953 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 23:15:21,954 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:21,954 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549321954"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549321954"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549321954"}]},"ts":"1689549321954"} 2023-07-16 23:15:21,956 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:22,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 
2023-07-16 23:15:22,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f8c9eb4dc8325188c8ee7648ac1d3697, NAME => 'testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:22,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:22,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,114 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,116 DEBUG [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/tr 2023-07-16 23:15:22,116 DEBUG [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/tr 2023-07-16 23:15:22,116 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f8c9eb4dc8325188c8ee7648ac1d3697 columnFamilyName tr 2023-07-16 23:15:22,117 INFO [StoreOpener-f8c9eb4dc8325188c8ee7648ac1d3697-1] regionserver.HStore(310): Store=f8c9eb4dc8325188c8ee7648ac1d3697/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:22,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:22,138 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f8c9eb4dc8325188c8ee7648ac1d3697; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9901278880, jitterRate=-0.07787154614925385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:22,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:22,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697., pid=131, masterSystemTime=1689549322107 2023-07-16 23:15:22,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:22,141 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:22,141 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f8c9eb4dc8325188c8ee7648ac1d3697, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:22,142 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689549322141"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549322141"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549322141"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549322141"}]},"ts":"1689549322141"} 2023-07-16 23:15:22,145 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-16 23:15:22,146 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure f8c9eb4dc8325188c8ee7648ac1d3697, server=jenkins-hbase4.apache.org,41683,1689549296507 in 188 msec 2023-07-16 23:15:22,147 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f8c9eb4dc8325188c8ee7648ac1d3697, REOPEN/MOVE in 511 msec 2023-07-16 23:15:22,489 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-16 23:15:22,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-16 23:15:22,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
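The GetRSGroupInfoOfTable and ListRSGroupInfos requests interleaved through this run are how the test checks each step. Reproduced client-side under the same assumed RSGroupAdminClient API, with the table name taken from the log, the verification would look roughly like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyGroupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // GetRSGroupInfoOfTable: which group does the table currently belong to?
      RSGroupInfo info =
          rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println("testRename is in group " + info.getName());
      // ListRSGroupInfos: dump every group with its servers and tables,
      // matching the "list rsgroup" entries in the log.
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group);
      }
    }
  }
}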
2023-07-16 23:15:22,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:22,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup default 2023-07-16 23:15:22,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 23:15:22,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:22,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-16 23:15:22,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to newgroup 2023-07-16 23:15:22,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-16 23:15:22,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:22,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-16 23:15:22,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:22,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:22,651 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:22,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:22,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:22,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:22,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:22,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 764 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550522671, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:22,671 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:22,673 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:22,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,674 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:22,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:22,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,697 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=508 (was 515), OpenFileDescriptor=772 (was 774), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 446), ProcessCount=176 (was 176), AvailableMemoryMB=2756 (was 2806) 2023-07-16 23:15:22,697 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 23:15:22,716 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=176, AvailableMemoryMB=2756 2023-07-16 23:15:22,716 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 23:15:22,717 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-16 23:15:22,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:22,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
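The ConstraintException traces above come from the per-test cleanup asking the master to move jenkins-hbase4.apache.org:37359 into the "master" rsgroup. Judging from the handler names (RpcServer.default.FPBQ.Fifo.handler=...,port=37359), that address is the active master's own RPC endpoint rather than one of the four region servers listed in the default group, and RSGroupAdminServer.moveServers rejects any address it does not currently know as a region server. Below is a minimal sketch, not taken from the test source, of how a client could guard such a move: RSGroupAdminClient.moveServers is the call shown in the stack trace, while the class name, the candidate-address handling, and the target group are illustrative assumptions.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Illustrative sketch: only move an address into an rsgroup if the master
// currently reports it as a live region server; otherwise skip the call,
// which would otherwise fail with "is either offline or it does not exist".
public class MoveOnlyLiveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Addresses the master currently sees as live region servers.
      Set<Address> live = new HashSet<>();
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        live.add(sn.getAddress());
      }
      // Hypothetical candidate; in the log this is the master's address, so
      // the guard below would skip it instead of triggering the exception.
      Address candidate = Address.fromParts("jenkins-hbase4.apache.org", 37359);
      if (live.contains(candidate)) {
        new RSGroupAdminClient(conn).moveServers(Collections.singleton(candidate), "master");
      }
    }
  }
}
```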
2023-07-16 23:15:22,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:22,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:22,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:22,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:22,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:22,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:22,733 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:22,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:22,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:22,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:22,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:22,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 792 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550522744, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:22,745 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:22,746 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:22,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,747 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:22,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:22,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-16 23:15:22,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:22,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-16 23:15:22,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-16 23:15:22,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-16 23:15:22,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-16 23:15:22,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:42846 deadline: 1689550522756, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-16 23:15:22,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-16 23:15:22,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 807 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:42846 deadline: 1689550522759, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 23:15:22,764 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-16 23:15:22,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-16 23:15:22,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-16 23:15:22,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 811 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:42846 deadline: 1689550522770, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 23:15:22,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:22,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
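The testBogusArgs sequence above shows how the rsgroup endpoint treats unknown names: the read-only lookups (GetRSGroupInfo for group "bogus", GetRSGroupInfoOfTable for table "nonexistent", GetRSGroupInfoOfServer for bogus:123) complete without any error in the log, whereas the mutating calls (RemoveRSGroup, MoveServers and BalanceRSGroup against "bogus") are each rejected with a ConstraintException. The sketch below replays those probes from the client side; it assumes the RSGroupAdminClient methods named in the stack traces, and the comment about null return values reflects the usual behaviour of these lookups rather than anything printed in this log.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Illustrative probes mirroring the testBogusArgs calls visible in the log.
public class BogusArgsProbe {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Read-only lookups for unknown names: no exception is logged server-side;
      // on this branch they are expected to simply return null.
      RSGroupInfo byGroup = rsGroupAdmin.getRSGroupInfo("bogus");
      RSGroupInfo byTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent"));
      RSGroupInfo byServer = rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123));
      System.out.println(byGroup + " / " + byTable + " / " + byServer);

      // A mutation against an unknown group is rejected by RSGroupAdminServer,
      // and the client sees the same ConstraintException shown in the log.
      try {
        rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      } catch (ConstraintException e) {
        System.out.println("rejected: " + e.getMessage()); // "RSGroup does not exist: bogus"
      }
    }
  }
}
```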
2023-07-16 23:15:22,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:22,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:22,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:22,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:22,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:22,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:22,787 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:22,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:22,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:22,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:22,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:22,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 835 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550522798, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:22,802 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:22,803 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:22,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,804 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:22,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:22,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,826 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 772), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 434), ProcessCount=176 (was 176), AvailableMemoryMB=2755 (was 2756) 2023-07-16 23:15:22,827 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 23:15:22,845 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=176, AvailableMemoryMB=2755 2023-07-16 23:15:22,845 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 23:15:22,845 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-16 23:15:22,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:22,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
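Before and after each test method the harness repeats the same housekeeping visible above: move empty table and server sets back to the default group (RSGroupAdminServer logs "moveTables() passed an empty set. Ignoring."), remove and re-add the "master" group, and then poll ListRSGroupInfos while "Waiting for cleanup to finish". The sketch below is a rough approximation of that reset, under the assumption that every non-default group should be emptied back into the default group; unlike the test, it does not re-create the "master" group afterwards.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Rough approximation of the per-test cleanup in the log: return every table
// and server to the default group and drop all other groups, so the next test
// starts from a clean rsgroup layout.
public class ResetRSGroups {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
          continue; // leave the default group alone
        }
        // The log shows empty moves being accepted (moveTables is ignored with
        // a DEBUG message), so calling these unconditionally is harmless for
        // groups that own nothing.
        rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.removeRSGroup(group.getName());
      }
    }
  }
}
```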
2023-07-16 23:15:22,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:22,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:22,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:22,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:22,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:22,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:22,861 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:22,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:22,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:22,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:22,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:22,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:22,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 863 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550522870, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:22,871 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:22,873 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:22,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,874 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:22,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:22,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:22,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,879 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:22,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:22,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:22,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 23:15:22,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to default 2023-07-16 23:15:22,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:22,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:22,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:22,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,898 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:22,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:22,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:22,903 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:22,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-16 23:15:22,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 23:15:22,905 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:22,905 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_81895074 2023-07-16 23:15:22,906 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:22,906 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:22,908 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d empty. 2023-07-16 23:15:22,912 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 empty. 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 empty. 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 empty. 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 empty. 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:22,913 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:22,913 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 23:15:22,927 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:22,928 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 386f23ef5ce0ad987d693fdf3cbce6a9, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:22,928 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 98fe108690ac41e3c0a831c5a632c946, NAME => 'Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:22,928 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9fb5d5b56b3b03d94fea78c78f9c406d, NAME => 'Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:22,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:22,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 98fe108690ac41e3c0a831c5a632c946, disabling compactions & flushes 2023-07-16 23:15:22,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:22,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:22,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. after waiting 0 ms 2023-07-16 23:15:22,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:22,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 
2023-07-16 23:15:22,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 98fe108690ac41e3c0a831c5a632c946: 2023-07-16 23:15:22,952 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7f84894ff213e2bee187a7ab6b14f954, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:22,955 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:22,955 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 9fb5d5b56b3b03d94fea78c78f9c406d, disabling compactions & flushes 2023-07-16 23:15:22,955 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:22,955 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:22,955 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. after waiting 0 ms 2023-07-16 23:15:22,955 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:22,955 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 
2023-07-16 23:15:22,955 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 9fb5d5b56b3b03d94fea78c78f9c406d: 2023-07-16 23:15:22,956 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => b4715ffe71b486d5a89e649d513a7559, NAME => 'Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp 2023-07-16 23:15:22,957 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:22,957 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 386f23ef5ce0ad987d693fdf3cbce6a9, disabling compactions & flushes 2023-07-16 23:15:22,957 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:22,957 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:22,957 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. after waiting 0 ms 2023-07-16 23:15:22,957 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:22,957 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:22,957 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 386f23ef5ce0ad987d693fdf3cbce6a9: 2023-07-16 23:15:22,967 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:22,967 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 7f84894ff213e2bee187a7ab6b14f954, disabling compactions & flushes 2023-07-16 23:15:22,967 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 
2023-07-16 23:15:22,967 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 2023-07-16 23:15:22,967 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. after waiting 0 ms 2023-07-16 23:15:22,967 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 2023-07-16 23:15:22,967 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 2023-07-16 23:15:22,967 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 7f84894ff213e2bee187a7ab6b14f954: 2023-07-16 23:15:22,971 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:22,971 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing b4715ffe71b486d5a89e649d513a7559, disabling compactions & flushes 2023-07-16 23:15:22,971 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 2023-07-16 23:15:22,972 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 2023-07-16 23:15:22,972 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. after waiting 0 ms 2023-07-16 23:15:22,972 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 2023-07-16 23:15:22,972 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 
2023-07-16 23:15:22,972 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for b4715ffe71b486d5a89e649d513a7559: 2023-07-16 23:15:22,974 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:22,975 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549322975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549322975"}]},"ts":"1689549322975"} 2023-07-16 23:15:22,975 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549322975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549322975"}]},"ts":"1689549322975"} 2023-07-16 23:15:22,975 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549322975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549322975"}]},"ts":"1689549322975"} 2023-07-16 23:15:22,975 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549322975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549322975"}]},"ts":"1689549322975"} 2023-07-16 23:15:22,975 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549322975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549322975"}]},"ts":"1689549322975"} 2023-07-16 23:15:22,977 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-16 23:15:22,978 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:22,978 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549322978"}]},"ts":"1689549322978"} 2023-07-16 23:15:22,979 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-16 23:15:22,983 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:22,983 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:22,983 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:22,983 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:22,983 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, ASSIGN}] 2023-07-16 23:15:22,986 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, ASSIGN 2023-07-16 23:15:22,986 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, ASSIGN 2023-07-16 23:15:22,986 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, ASSIGN 2023-07-16 23:15:22,986 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, ASSIGN 2023-07-16 23:15:22,986 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:22,987 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, ASSIGN 2023-07-16 23:15:22,987 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:22,987 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:22,987 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43561,1689549300217; forceNewPlan=false, retain=false 2023-07-16 23:15:22,987 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41683,1689549296507; forceNewPlan=false, retain=false 2023-07-16 23:15:23,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 23:15:23,137 INFO [jenkins-hbase4:37359] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 23:15:23,141 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=b4715ffe71b486d5a89e649d513a7559, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,141 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=98fe108690ac41e3c0a831c5a632c946, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,141 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323141"}]},"ts":"1689549323141"} 2023-07-16 23:15:23,141 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=386f23ef5ce0ad987d693fdf3cbce6a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,141 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=7f84894ff213e2bee187a7ab6b14f954, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:23,141 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=9fb5d5b56b3b03d94fea78c78f9c406d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:23,141 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323141"}]},"ts":"1689549323141"} 2023-07-16 23:15:23,141 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323141"}]},"ts":"1689549323141"} 2023-07-16 23:15:23,141 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323141"}]},"ts":"1689549323141"} 2023-07-16 23:15:23,141 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323141"}]},"ts":"1689549323141"} 2023-07-16 23:15:23,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE; OpenRegionProcedure b4715ffe71b486d5a89e649d513a7559, 
server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:23,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=135, state=RUNNABLE; OpenRegionProcedure 386f23ef5ce0ad987d693fdf3cbce6a9, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:23,144 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=133, state=RUNNABLE; OpenRegionProcedure 9fb5d5b56b3b03d94fea78c78f9c406d, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:23,145 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=134, state=RUNNABLE; OpenRegionProcedure 98fe108690ac41e3c0a831c5a632c946, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:23,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure 7f84894ff213e2bee187a7ab6b14f954, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:23,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 23:15:23,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 386f23ef5ce0ad987d693fdf3cbce6a9, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 23:15:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,300 INFO [StoreOpener-386f23ef5ce0ad987d693fdf3cbce6a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 
2023-07-16 23:15:23,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9fb5d5b56b3b03d94fea78c78f9c406d, NAME => 'Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 23:15:23,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:23,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,302 DEBUG [StoreOpener-386f23ef5ce0ad987d693fdf3cbce6a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/f 2023-07-16 23:15:23,302 DEBUG [StoreOpener-386f23ef5ce0ad987d693fdf3cbce6a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/f 2023-07-16 23:15:23,302 INFO [StoreOpener-386f23ef5ce0ad987d693fdf3cbce6a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 386f23ef5ce0ad987d693fdf3cbce6a9 columnFamilyName f 2023-07-16 23:15:23,302 INFO [StoreOpener-9fb5d5b56b3b03d94fea78c78f9c406d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,303 INFO [StoreOpener-386f23ef5ce0ad987d693fdf3cbce6a9-1] regionserver.HStore(310): Store=386f23ef5ce0ad987d693fdf3cbce6a9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:23,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,304 DEBUG 
[StoreOpener-9fb5d5b56b3b03d94fea78c78f9c406d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/f 2023-07-16 23:15:23,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,304 DEBUG [StoreOpener-9fb5d5b56b3b03d94fea78c78f9c406d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/f 2023-07-16 23:15:23,304 INFO [StoreOpener-9fb5d5b56b3b03d94fea78c78f9c406d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9fb5d5b56b3b03d94fea78c78f9c406d columnFamilyName f 2023-07-16 23:15:23,305 INFO [StoreOpener-9fb5d5b56b3b03d94fea78c78f9c406d-1] regionserver.HStore(310): Store=9fb5d5b56b3b03d94fea78c78f9c406d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:23,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:23,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 386f23ef5ce0ad987d693fdf3cbce6a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9825667520, jitterRate=-0.08491340279579163}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:23,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 386f23ef5ce0ad987d693fdf3cbce6a9: 2023-07-16 23:15:23,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9., pid=139, masterSystemTime=1689549323294 2023-07-16 23:15:23,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:23,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9fb5d5b56b3b03d94fea78c78f9c406d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11871534720, jitterRate=0.10562282800674438}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:23,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9fb5d5b56b3b03d94fea78c78f9c406d: 2023-07-16 23:15:23,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 
2023-07-16 23:15:23,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98fe108690ac41e3c0a831c5a632c946, NAME => 'Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 23:15:23,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d., pid=140, masterSystemTime=1689549323297 2023-07-16 23:15:23,312 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=386f23ef5ce0ad987d693fdf3cbce6a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:23,312 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323312"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549323312"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549323312"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549323312"}]},"ts":"1689549323312"} 2023-07-16 23:15:23,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:23,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:23,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 
2023-07-16 23:15:23,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f84894ff213e2bee187a7ab6b14f954, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 23:15:23,313 INFO [StoreOpener-98fe108690ac41e3c0a831c5a632c946-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,313 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=9fb5d5b56b3b03d94fea78c78f9c406d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:23,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:23,314 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323313"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549323313"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549323313"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549323313"}]},"ts":"1689549323313"} 2023-07-16 23:15:23,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,315 DEBUG [StoreOpener-98fe108690ac41e3c0a831c5a632c946-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/f 2023-07-16 23:15:23,315 INFO [StoreOpener-7f84894ff213e2bee187a7ab6b14f954-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,315 DEBUG [StoreOpener-98fe108690ac41e3c0a831c5a632c946-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/f 2023-07-16 23:15:23,316 INFO [StoreOpener-98fe108690ac41e3c0a831c5a632c946-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98fe108690ac41e3c0a831c5a632c946 columnFamilyName f 2023-07-16 23:15:23,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-16 23:15:23,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; OpenRegionProcedure 386f23ef5ce0ad987d693fdf3cbce6a9, server=jenkins-hbase4.apache.org,41683,1689549296507 in 171 msec 2023-07-16 23:15:23,316 INFO [StoreOpener-98fe108690ac41e3c0a831c5a632c946-1] regionserver.HStore(310): Store=98fe108690ac41e3c0a831c5a632c946/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:23,316 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=133 2023-07-16 23:15:23,317 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=133, state=SUCCESS; OpenRegionProcedure 9fb5d5b56b3b03d94fea78c78f9c406d, server=jenkins-hbase4.apache.org,43561,1689549300217 in 171 msec 2023-07-16 23:15:23,317 DEBUG [StoreOpener-7f84894ff213e2bee187a7ab6b14f954-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/f 2023-07-16 23:15:23,317 DEBUG [StoreOpener-7f84894ff213e2bee187a7ab6b14f954-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/f 2023-07-16 23:15:23,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,317 INFO [StoreOpener-7f84894ff213e2bee187a7ab6b14f954-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f84894ff213e2bee187a7ab6b14f954 columnFamilyName f 2023-07-16 23:15:23,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 2023-07-16 
23:15:23,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, ASSIGN in 333 msec 2023-07-16 23:15:23,318 INFO [StoreOpener-7f84894ff213e2bee187a7ab6b14f954-1] regionserver.HStore(310): Store=7f84894ff213e2bee187a7ab6b14f954/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:23,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, ASSIGN in 333 msec 2023-07-16 23:15:23,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:23,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 98fe108690ac41e3c0a831c5a632c946; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11910643200, jitterRate=0.10926508903503418}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:23,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 98fe108690ac41e3c0a831c5a632c946: 2023-07-16 23:15:23,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946., pid=141, masterSystemTime=1689549323294 2023-07-16 23:15:23,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:23,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:23,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 
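
The CompactionConfiguration(173) entries above list the effective compaction settings for column family f (minCompactSize 128 MB, 3..10 files per compaction, ratio 1.2, major period 604800000 ms with 0.5 jitter). The values match HBase defaults; as an aid, a hedged sketch of the standard configuration keys those numbers usually come from follows. The property names and the idea that they could be set explicitly are assumptions, not something this test does:

```java
// Sketch only: standard HBase configuration keys behind the CompactionConfiguration(173) entries
// above. The values shown are the defaults that match this log; the test itself sets none of them.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);          // "ratio 1.200000"
        conf.setInt("hbase.hstore.compaction.min", 3);                 // "minFilesToCompact:3"
        conf.setInt("hbase.hstore.compaction.max", 10);                // "maxFilesToCompact:10"
        conf.setLong("hbase.hstore.compaction.min.size", 134217728L);  // "minCompactSize:128 MB"
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // "major period 604800000" (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5F);   // "major jitter 0.500000"
    }
}
```
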
2023-07-16 23:15:23,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4715ffe71b486d5a89e649d513a7559, NAME => 'Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 23:15:23,325 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=98fe108690ac41e3c0a831c5a632c946, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:23,325 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323325"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549323325"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549323325"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549323325"}]},"ts":"1689549323325"} 2023-07-16 23:15:23,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=134 2023-07-16 23:15:23,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=134, state=SUCCESS; OpenRegionProcedure 98fe108690ac41e3c0a831c5a632c946, server=jenkins-hbase4.apache.org,41683,1689549296507 in 181 msec 2023-07-16 23:15:23,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, ASSIGN in 345 msec 2023-07-16 23:15:23,331 INFO [StoreOpener-b4715ffe71b486d5a89e649d513a7559-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:23,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f84894ff213e2bee187a7ab6b14f954; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10910367840, jitterRate=0.016107186675071716}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:23,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f84894ff213e2bee187a7ab6b14f954: 2023-07-16 23:15:23,332 DEBUG [StoreOpener-b4715ffe71b486d5a89e649d513a7559-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/f 2023-07-16 23:15:23,332 DEBUG [StoreOpener-b4715ffe71b486d5a89e649d513a7559-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/f 2023-07-16 23:15:23,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954., pid=142, masterSystemTime=1689549323297 2023-07-16 23:15:23,332 INFO [StoreOpener-b4715ffe71b486d5a89e649d513a7559-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4715ffe71b486d5a89e649d513a7559 columnFamilyName f 2023-07-16 23:15:23,333 INFO [StoreOpener-b4715ffe71b486d5a89e649d513a7559-1] regionserver.HStore(310): Store=b4715ffe71b486d5a89e649d513a7559/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:23,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 2023-07-16 23:15:23,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 
2023-07-16 23:15:23,334 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=7f84894ff213e2bee187a7ab6b14f954, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:23,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,334 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323334"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549323334"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549323334"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549323334"}]},"ts":"1689549323334"} 2023-07-16 23:15:23,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,336 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-16 23:15:23,336 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure 7f84894ff213e2bee187a7ab6b14f954, server=jenkins-hbase4.apache.org,43561,1689549300217 in 188 msec 2023-07-16 23:15:23,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,337 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, ASSIGN in 353 msec 2023-07-16 23:15:23,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:23,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4715ffe71b486d5a89e649d513a7559; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11169603360, jitterRate=0.040250375866889954}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:23,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4715ffe71b486d5a89e649d513a7559: 2023-07-16 23:15:23,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559., pid=138, masterSystemTime=1689549323294 2023-07-16 23:15:23,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 
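
The CreateTableProcedure (pid=132) and its per-region ASSIGN subprocedures finishing in the entries around this point correspond, client-side, to a single pre-split create-table call. A minimal sketch of such a call is below; the table name, family 'f' and the five split points are taken from the log, while the class name and connection setup are assumptions and not the test's actual code:

```java
// Sketch only: a client-side create that would yield the five regions traced above
// (split points 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). Connection setup is assumed.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateDisabledTableMoveSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            TableName tn = TableName.valueOf("Group_testDisabledTableMove");
            byte[][] splits = {
                Bytes.toBytes("aaaaa"),
                new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
                new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
                Bytes.toBytes("zzzzz")
            };
            // The synchronous create returns once the master reports the procedure (pid=132 above) done.
            admin.createTable(
                TableDescriptorBuilder.newBuilder(tn)
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                    .build(),
                splits);
        }
    }
}
```
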
2023-07-16 23:15:23,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 2023-07-16 23:15:23,341 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=b4715ffe71b486d5a89e649d513a7559, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,341 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323341"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549323341"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549323341"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549323341"}]},"ts":"1689549323341"} 2023-07-16 23:15:23,343 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-16 23:15:23,343 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; OpenRegionProcedure b4715ffe71b486d5a89e649d513a7559, server=jenkins-hbase4.apache.org,41683,1689549296507 in 199 msec 2023-07-16 23:15:23,344 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=132 2023-07-16 23:15:23,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, ASSIGN in 360 msec 2023-07-16 23:15:23,345 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:23,345 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549323345"}]},"ts":"1689549323345"} 2023-07-16 23:15:23,346 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-16 23:15:23,348 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:23,349 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 448 msec 2023-07-16 23:15:23,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 23:15:23,507 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-16 23:15:23,507 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-16 23:15:23,508 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:23,511 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-16 23:15:23,512 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:23,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-16 23:15:23,512 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:23,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 23:15:23,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:23,519 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 23:15:23,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 23:15:23,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 23:15:23,523 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549323523"}]},"ts":"1689549323523"} 2023-07-16 23:15:23,525 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-16 23:15:23,526 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-16 23:15:23,527 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, UNASSIGN}] 2023-07-16 23:15:23,528 INFO [PEWorker-1] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, UNASSIGN 2023-07-16 23:15:23,530 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, UNASSIGN 2023-07-16 23:15:23,530 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, UNASSIGN 2023-07-16 23:15:23,531 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, UNASSIGN 2023-07-16 23:15:23,531 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, UNASSIGN 2023-07-16 23:15:23,531 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=7f84894ff213e2bee187a7ab6b14f954, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:23,531 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323531"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323531"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323531"}]},"ts":"1689549323531"} 2023-07-16 23:15:23,532 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=98fe108690ac41e3c0a831c5a632c946, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,532 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=386f23ef5ce0ad987d693fdf3cbce6a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,532 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323532"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323532"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323532"}]},"ts":"1689549323532"} 2023-07-16 23:15:23,532 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=b4715ffe71b486d5a89e649d513a7559, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:23,532 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=9fb5d5b56b3b03d94fea78c78f9c406d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:23,532 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323532"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323532"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323532"}]},"ts":"1689549323532"} 2023-07-16 23:15:23,532 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323532"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323532"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323532"}]},"ts":"1689549323532"} 2023-07-16 23:15:23,532 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323532"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549323532"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549323532"}]},"ts":"1689549323532"} 2023-07-16 23:15:23,533 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 7f84894ff213e2bee187a7ab6b14f954, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:23,534 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 98fe108690ac41e3c0a831c5a632c946, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:23,534 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=146, state=RUNNABLE; CloseRegionProcedure 386f23ef5ce0ad987d693fdf3cbce6a9, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:23,535 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=144, state=RUNNABLE; CloseRegionProcedure 9fb5d5b56b3b03d94fea78c78f9c406d, server=jenkins-hbase4.apache.org,43561,1689549300217}] 2023-07-16 23:15:23,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure b4715ffe71b486d5a89e649d513a7559, server=jenkins-hbase4.apache.org,41683,1689549296507}] 2023-07-16 23:15:23,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 23:15:23,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f84894ff213e2bee187a7ab6b14f954, disabling compactions & flushes 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 98fe108690ac41e3c0a831c5a632c946, disabling compactions & flushes 2023-07-16 23:15:23,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 
2023-07-16 23:15:23,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. after waiting 0 ms 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. after waiting 0 ms 2023-07-16 23:15:23,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:23,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:23,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:23,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946. 2023-07-16 23:15:23,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954. 
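
The UNASSIGN and CloseRegionProcedure entries in this stretch (pids 144 through 153) are all server-side fan-out from the single DisableTableProcedure stored as pid=143. On the client side that whole sequence is one call; a minimal sketch follows, where the class and method names are hypothetical and the Admin handle is assumed to come from an open connection as in the previous sketch:

```java
// Sketch only: the client-side call behind DisableTableProcedure pid=143. Every UNASSIGN /
// CloseRegionProcedure entry above is a server-side child of this single request.
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DisableSketch {
    static void disable(Admin admin) throws IOException {
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        admin.disableTable(tn);              // returns after the table is marked DISABLED in hbase:meta
        if (!admin.isTableDisabled(tn)) {    // matches "Set Group_testDisabledTableMove to state=DISABLED"
            throw new IllegalStateException("disable did not complete");
        }
    }
}
```
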
2023-07-16 23:15:23,692 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f84894ff213e2bee187a7ab6b14f954: 2023-07-16 23:15:23,692 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 98fe108690ac41e3c0a831c5a632c946: 2023-07-16 23:15:23,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9fb5d5b56b3b03d94fea78c78f9c406d, disabling compactions & flushes 2023-07-16 23:15:23,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:23,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:23,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. after waiting 0 ms 2023-07-16 23:15:23,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:23,695 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=7f84894ff213e2bee187a7ab6b14f954, regionState=CLOSED 2023-07-16 23:15:23,695 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549323695"}]},"ts":"1689549323695"} 2023-07-16 23:15:23,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 386f23ef5ce0ad987d693fdf3cbce6a9, disabling compactions & flushes 2023-07-16 23:15:23,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 
after waiting 0 ms 2023-07-16 23:15:23,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,697 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=98fe108690ac41e3c0a831c5a632c946, regionState=CLOSED 2023-07-16 23:15:23,697 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323697"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549323697"}]},"ts":"1689549323697"} 2023-07-16 23:15:23,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:23,699 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-16 23:15:23,699 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 7f84894ff213e2bee187a7ab6b14f954, server=jenkins-hbase4.apache.org,43561,1689549300217 in 164 msec 2023-07-16 23:15:23,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d. 2023-07-16 23:15:23,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9fb5d5b56b3b03d94fea78c78f9c406d: 2023-07-16 23:15:23,700 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-16 23:15:23,700 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 98fe108690ac41e3c0a831c5a632c946, server=jenkins-hbase4.apache.org,41683,1689549296507 in 164 msec 2023-07-16 23:15:23,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7f84894ff213e2bee187a7ab6b14f954, UNASSIGN in 172 msec 2023-07-16 23:15:23,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:23,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=98fe108690ac41e3c0a831c5a632c946, UNASSIGN in 173 msec 2023-07-16 23:15:23,702 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=9fb5d5b56b3b03d94fea78c78f9c406d, regionState=CLOSED 2023-07-16 23:15:23,702 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323702"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549323702"}]},"ts":"1689549323702"} 2023-07-16 23:15:23,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9. 2023-07-16 23:15:23,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 386f23ef5ce0ad987d693fdf3cbce6a9: 2023-07-16 23:15:23,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4715ffe71b486d5a89e649d513a7559, disabling compactions & flushes 2023-07-16 23:15:23,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 2023-07-16 23:15:23,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 2023-07-16 23:15:23,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. after waiting 0 ms 2023-07-16 23:15:23,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 
2023-07-16 23:15:23,704 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=386f23ef5ce0ad987d693fdf3cbce6a9, regionState=CLOSED 2023-07-16 23:15:23,705 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689549323704"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549323704"}]},"ts":"1689549323704"} 2023-07-16 23:15:23,706 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=144 2023-07-16 23:15:23,706 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=144, state=SUCCESS; CloseRegionProcedure 9fb5d5b56b3b03d94fea78c78f9c406d, server=jenkins-hbase4.apache.org,43561,1689549300217 in 168 msec 2023-07-16 23:15:23,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9fb5d5b56b3b03d94fea78c78f9c406d, UNASSIGN in 179 msec 2023-07-16 23:15:23,707 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-16 23:15:23,707 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; CloseRegionProcedure 386f23ef5ce0ad987d693fdf3cbce6a9, server=jenkins-hbase4.apache.org,41683,1689549296507 in 172 msec 2023-07-16 23:15:23,708 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=386f23ef5ce0ad987d693fdf3cbce6a9, UNASSIGN in 180 msec 2023-07-16 23:15:23,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:23,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559. 
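
The entries that follow (from 23:15:23,826 onward) show the point of this test case: moving the now-disabled table to rsgroup Group_testDisabledTableMove_81895074, where the master skips region moves ("Skipping move regions because the table Group_testDisabledTableMove is disabled") yet still records the table in the target group. A hedged sketch of the client call that produces that RSGroupAdminService.MoveTables trace, assuming the RSGroupAdminClient shipped with the hbase-rsgroup module on this branch and a target group created earlier in the test (outside this excerpt); the class name is hypothetical:

```java
// Sketch only: the RSGroupAdminService.MoveTables request traced below. Assumes the hbase-rsgroup
// RSGroupAdminClient on branch-2.4 and that the target group already exists.
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveDisabledTableSketch {
    static void moveTable(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        rsGroupAdmin.moveTables(Collections.singleton(tn), "Group_testDisabledTableMove_81895074");
        // The group mapping changes even though 0 regions are moved, because the table is disabled.
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(tn);
        System.out.println(info.getName());  // expected: Group_testDisabledTableMove_81895074
    }
}
```
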
2023-07-16 23:15:23,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4715ffe71b486d5a89e649d513a7559: 2023-07-16 23:15:23,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,711 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=b4715ffe71b486d5a89e649d513a7559, regionState=CLOSED 2023-07-16 23:15:23,711 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689549323711"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549323711"}]},"ts":"1689549323711"} 2023-07-16 23:15:23,713 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-16 23:15:23,713 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure b4715ffe71b486d5a89e649d513a7559, server=jenkins-hbase4.apache.org,41683,1689549296507 in 177 msec 2023-07-16 23:15:23,714 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-16 23:15:23,714 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b4715ffe71b486d5a89e649d513a7559, UNASSIGN in 186 msec 2023-07-16 23:15:23,715 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549323715"}]},"ts":"1689549323715"} 2023-07-16 23:15:23,716 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-16 23:15:23,718 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-16 23:15:23,720 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 199 msec 2023-07-16 23:15:23,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 23:15:23,826 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-16 23:15:23,826 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:23,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,830 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:23,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:23,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-16 23:15:23,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_81895074, current retry=0 2023-07-16 23:15:23,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_81895074. 2023-07-16 23:15:23,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:23,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:23,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:23,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 23:15:23,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:23,839 INFO [Listener at localhost/40131] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 23:15:23,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 23:15:23,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:23,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 923 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:42846 deadline: 1689549383839, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-16 23:15:23,840 DEBUG [Listener at localhost/40131] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-16 23:15:23,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-16 23:15:23,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,843 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_81895074' 2023-07-16 23:15:23,844 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:23,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:23,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:23,850 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,850 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,850 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,850 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,850 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-16 23:15:23,853 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/recovered.edits] 2023-07-16 23:15:23,853 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/recovered.edits] 2023-07-16 23:15:23,853 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/recovered.edits] 2023-07-16 23:15:23,853 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/recovered.edits] 2023-07-16 23:15:23,853 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/f, FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/recovered.edits] 2023-07-16 23:15:23,862 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946/recovered.edits/4.seqid 2023-07-16 23:15:23,862 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/recovered.edits/4.seqid to 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d/recovered.edits/4.seqid 2023-07-16 23:15:23,863 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/98fe108690ac41e3c0a831c5a632c946 2023-07-16 23:15:23,863 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9/recovered.edits/4.seqid 2023-07-16 23:15:23,863 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/9fb5d5b56b3b03d94fea78c78f9c406d 2023-07-16 23:15:23,863 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954/recovered.edits/4.seqid 2023-07-16 23:15:23,863 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/386f23ef5ce0ad987d693fdf3cbce6a9 2023-07-16 23:15:23,864 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/7f84894ff213e2bee187a7ab6b14f954 2023-07-16 23:15:23,864 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/recovered.edits/4.seqid to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/archive/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559/recovered.edits/4.seqid 2023-07-16 23:15:23,865 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/.tmp/data/default/Group_testDisabledTableMove/b4715ffe71b486d5a89e649d513a7559 2023-07-16 23:15:23,865 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 23:15:23,867 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,869 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-16 23:15:23,873 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-16 23:15:23,874 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,874 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-16 23:15:23,874 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549323874"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:23,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549323874"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:23,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549323874"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:23,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549323874"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:23,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549323874"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:23,876 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 23:15:23,876 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9fb5d5b56b3b03d94fea78c78f9c406d, NAME => 'Group_testDisabledTableMove,,1689549322900.9fb5d5b56b3b03d94fea78c78f9c406d.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 98fe108690ac41e3c0a831c5a632c946, NAME => 'Group_testDisabledTableMove,aaaaa,1689549322900.98fe108690ac41e3c0a831c5a632c946.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 386f23ef5ce0ad987d693fdf3cbce6a9, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689549322900.386f23ef5ce0ad987d693fdf3cbce6a9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 7f84894ff213e2bee187a7ab6b14f954, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689549322900.7f84894ff213e2bee187a7ab6b14f954.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => b4715ffe71b486d5a89e649d513a7559, NAME => 'Group_testDisabledTableMove,zzzzz,1689549322900.b4715ffe71b486d5a89e649d513a7559.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 23:15:23,876 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-16 23:15:23,876 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549323876"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:23,878 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-16 23:15:23,880 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 23:15:23,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 39 msec 2023-07-16 23:15:23,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-16 23:15:23,954 INFO [Listener at localhost/40131] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-16 23:15:23,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:23,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:23,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:23,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
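The records above trace the standard drop of an already-disabled table: the DisableTable RPC is rejected with TableNotEnabledException, the test utility notes the table is already disabled and deletes it directly, and DeleteTableProcedure pid=155 archives the region directories, clears hbase:meta, and removes the descriptor. A minimal client-side sketch of that same sequence against the public Admin API, assuming a reachable cluster; the table name is taken from the log, everything else is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropDisabledTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove"); // name from the log
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          try {
            admin.disableTable(table);   // the master rejects this if the table is already disabled
          } catch (TableNotEnabledException e) {
            // Matches the TableNotEnabledException logged above: nothing to disable, go straight to delete.
          }
          admin.deleteTable(table);      // submits the DeleteTableProcedure the log traces (pid=155)
        }
      }
    }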
2023-07-16 23:15:23,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:23,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989] to rsgroup default 2023-07-16 23:15:23,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:23,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:23,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:23,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_81895074, current retry=0 2023-07-16 23:15:23,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33913,1689549296335, jenkins-hbase4.apache.org,38989,1689549296125] are moved back to Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_81895074 => default 2023-07-16 23:15:23,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:23,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_81895074 2023-07-16 23:15:23,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:23,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:23,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:23,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:23,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:23,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
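The teardown records above move the test group's servers back to the default group and then remove the group itself. A rough equivalent through RSGroupAdminClient from the hbase-rsgroup module (the same client that appears in the stack traces below); the group name and server addresses are copied from the log, while the surrounding wiring is only an illustrative sketch:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          String testGroup = "Group_testDisabledTableMove_81895074";      // group name from the log
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33913));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38989));
          groups.moveServers(servers, "default"); // RSGroupAdminService.MoveServers, as logged above
          groups.removeRSGroup(testGroup);        // RSGroupAdminService.RemoveRSGroup
        }
      }
    }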
2023-07-16 23:15:23,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:23,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:23,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:23,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:23,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:23,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:23,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:23,976 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:23,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:23,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:23,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:23,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:23,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:23,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:23,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:23,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:23,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:23,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 957 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550523986, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:23,986 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:23,988 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:23,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:23,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:23,989 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:23,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:23,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:24,008 INFO [Listener at localhost/40131] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=513 (was 512) Potentially hanging thread: hconnection-0x29a77039-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_107485817_17 at /127.0.0.1:46300 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cf74ee0-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-184735455_17 at /127.0.0.1:53166 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 772) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 434), ProcessCount=176 (was 176), AvailableMemoryMB=2756 (was 2755) - AvailableMemoryMB LEAK? 
- 2023-07-16 23:15:24,008 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-16 23:15:24,024 INFO [Listener at localhost/40131] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=513, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=176, AvailableMemoryMB=2755 2023-07-16 23:15:24,024 WARN [Listener at localhost/40131] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-16 23:15:24,025 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-16 23:15:24,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:24,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:24,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:24,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 23:15:24,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:24,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:24,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:24,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:24,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:24,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:24,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:24,037 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:24,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:24,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:24,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 23:15:24,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:24,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:24,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:24,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:24,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37359] to rsgroup master 2023-07-16 23:15:24,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:24,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 985 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42846 deadline: 1689550524054, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 2023-07-16 23:15:24,054 WARN [Listener at localhost/40131] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37359 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:24,056 INFO [Listener at localhost/40131] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:24,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:24,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:24,057 INFO [Listener at localhost/40131] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33913, jenkins-hbase4.apache.org:38989, jenkins-hbase4.apache.org:41683, jenkins-hbase4.apache.org:43561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:24,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:24,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:24,058 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 23:15:24,058 INFO [Listener at localhost/40131] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 23:15:24,058 DEBUG [Listener at localhost/40131] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b290534 to 127.0.0.1:63904 2023-07-16 23:15:24,058 DEBUG [Listener at localhost/40131] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,060 DEBUG [Listener at localhost/40131] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 23:15:24,060 DEBUG [Listener at localhost/40131] util.JVMClusterUtil(257): Found active master hash=988988473, stopped=false 2023-07-16 23:15:24,060 DEBUG [Listener at localhost/40131] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 23:15:24,061 DEBUG [Listener at localhost/40131] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 23:15:24,061 INFO [Listener at localhost/40131] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:15:24,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:24,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:24,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:24,062 DEBUG 
[Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:24,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:24,062 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:24,062 INFO [Listener at localhost/40131] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 23:15:24,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:24,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:24,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:24,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:24,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:24,063 DEBUG [Listener at localhost/40131] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2626cde5 to 127.0.0.1:63904 2023-07-16 23:15:24,063 DEBUG [Listener at localhost/40131] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,063 INFO [Listener at localhost/40131] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38989,1689549296125' ***** 2023-07-16 23:15:24,064 INFO [Listener at localhost/40131] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:24,064 INFO [Listener at localhost/40131] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33913,1689549296335' ***** 2023-07-16 23:15:24,064 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:24,064 INFO [Listener at localhost/40131] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:24,064 INFO [Listener at localhost/40131] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41683,1689549296507' ***** 2023-07-16 23:15:24,064 INFO [Listener at localhost/40131] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:24,064 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:24,064 INFO [Listener at localhost/40131] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43561,1689549300217' ***** 2023-07-16 23:15:24,065 INFO [Listener at 
localhost/40131] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:24,064 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:24,067 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:24,083 INFO [RS:1;jenkins-hbase4:33913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@333da51b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:24,083 INFO [RS:0;jenkins-hbase4:38989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ade5edf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:24,083 INFO [RS:2;jenkins-hbase4:41683] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@71934adb{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:24,083 INFO [RS:3;jenkins-hbase4:43561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6c9a0ae6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:24,087 INFO [RS:3;jenkins-hbase4:43561] server.AbstractConnector(383): Stopped ServerConnector@4c8b9b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:24,087 INFO [RS:2;jenkins-hbase4:41683] server.AbstractConnector(383): Stopped ServerConnector@4c5e3ae5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:24,087 INFO [RS:1;jenkins-hbase4:33913] server.AbstractConnector(383): Stopped ServerConnector@25647a91{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:24,087 INFO [RS:0;jenkins-hbase4:38989] server.AbstractConnector(383): Stopped ServerConnector@3e4a8ce8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:24,087 INFO [RS:1;jenkins-hbase4:33913] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:24,087 INFO [RS:2;jenkins-hbase4:41683] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:24,087 INFO [RS:3;jenkins-hbase4:43561] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:24,088 INFO [RS:1;jenkins-hbase4:33913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@129754f6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:24,087 INFO [RS:0;jenkins-hbase4:38989] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:24,089 INFO [RS:1;jenkins-hbase4:33913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32f14bae{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:24,089 INFO [RS:3;jenkins-hbase4:43561] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1439103a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:24,089 INFO [RS:2;jenkins-hbase4:41683] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1a254359{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:24,090 INFO [RS:0;jenkins-hbase4:38989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@655f7375{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:24,091 INFO [RS:2;jenkins-hbase4:41683] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@513690f4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:24,090 INFO [RS:3;jenkins-hbase4:43561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@252d6de0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:24,092 INFO [RS:0;jenkins-hbase4:38989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21e06b66{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:24,093 INFO [RS:3;jenkins-hbase4:43561] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:24,094 INFO [RS:0;jenkins-hbase4:38989] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:24,094 INFO [RS:3;jenkins-hbase4:43561] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:24,094 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:24,094 INFO [RS:3;jenkins-hbase4:43561] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 23:15:24,094 INFO [RS:2;jenkins-hbase4:41683] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:24,094 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(3305): Received CLOSE for 898ed5e7258b3e0527188384fae4bfe2 2023-07-16 23:15:24,094 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:24,094 INFO [RS:0;jenkins-hbase4:38989] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:24,094 INFO [RS:0;jenkins-hbase4:38989] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 23:15:24,094 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:24,095 DEBUG [RS:0;jenkins-hbase4:38989] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0aed5709 to 127.0.0.1:63904 2023-07-16 23:15:24,094 INFO [RS:1;jenkins-hbase4:33913] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:24,095 DEBUG [RS:0;jenkins-hbase4:38989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,095 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:24,095 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38989,1689549296125; all regions closed. 2023-07-16 23:15:24,095 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(3305): Received CLOSE for dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:24,094 INFO [RS:2;jenkins-hbase4:41683] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:24,095 INFO [RS:2;jenkins-hbase4:41683] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 23:15:24,094 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:24,095 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(3305): Received CLOSE for f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:24,095 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(3305): Received CLOSE for 246728e01e8e564172b05cb8c4263f93 2023-07-16 23:15:24,095 INFO [RS:1;jenkins-hbase4:33913] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:24,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 898ed5e7258b3e0527188384fae4bfe2, disabling compactions & flushes 2023-07-16 23:15:24,096 INFO [RS:1;jenkins-hbase4:33913] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 23:15:24,095 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:24,095 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:24,096 DEBUG [RS:3;jenkins-hbase4:43561] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2a199d1b to 127.0.0.1:63904 2023-07-16 23:15:24,096 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:24,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:24,096 DEBUG [RS:1;jenkins-hbase4:33913] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0cb18b5c to 127.0.0.1:63904 2023-07-16 23:15:24,096 DEBUG [RS:3;jenkins-hbase4:43561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,096 DEBUG [RS:1;jenkins-hbase4:33913] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,096 INFO [RS:3;jenkins-hbase4:43561] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-16 23:15:24,096 DEBUG [RS:2;jenkins-hbase4:41683] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f627fca to 127.0.0.1:63904 2023-07-16 23:15:24,096 INFO [RS:3;jenkins-hbase4:43561] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:24,096 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33913,1689549296335; all regions closed. 2023-07-16 23:15:24,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:24,096 INFO [RS:3;jenkins-hbase4:43561] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:24,097 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 23:15:24,096 DEBUG [RS:2;jenkins-hbase4:41683] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. after waiting 0 ms 2023-07-16 23:15:24,097 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 23:15:24,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:24,097 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1478): Online Regions={f8c9eb4dc8325188c8ee7648ac1d3697=testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697.} 2023-07-16 23:15:24,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 898ed5e7258b3e0527188384fae4bfe2 1/1 column families, dataSize=27.11 KB heapSize=44.65 KB 2023-07-16 23:15:24,098 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-16 23:15:24,098 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1478): Online Regions={898ed5e7258b3e0527188384fae4bfe2=hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2., dee4450ec086e99bcaec16c3a6848eb5=unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5., 1588230740=hbase:meta,,1.1588230740, 246728e01e8e564172b05cb8c4263f93=hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93.} 2023-07-16 23:15:24,102 DEBUG [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1504): Waiting on 1588230740, 246728e01e8e564172b05cb8c4263f93, 898ed5e7258b3e0527188384fae4bfe2, dee4450ec086e99bcaec16c3a6848eb5 2023-07-16 23:15:24,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f8c9eb4dc8325188c8ee7648ac1d3697, disabling compactions & flushes 2023-07-16 23:15:24,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:24,103 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:15:24,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 
2023-07-16 23:15:24,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. after waiting 0 ms 2023-07-16 23:15:24,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:24,103 DEBUG [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1504): Waiting on f8c9eb4dc8325188c8ee7648ac1d3697 2023-07-16 23:15:24,104 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:15:24,104 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:15:24,104 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:15:24,104 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:15:24,104 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.59 KB heapSize=120.50 KB 2023-07-16 23:15:24,105 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,105 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,105 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,105 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/testRename/f8c9eb4dc8325188c8ee7648ac1d3697/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 23:15:24,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 2023-07-16 23:15:24,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f8c9eb4dc8325188c8ee7648ac1d3697: 2023-07-16 23:15:24,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689549317272.f8c9eb4dc8325188c8ee7648ac1d3697. 
2023-07-16 23:15:24,128 DEBUG [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs 2023-07-16 23:15:24,128 INFO [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38989%2C1689549296125.meta:.meta(num 1689549298783) 2023-07-16 23:15:24,142 DEBUG [RS:1;jenkins-hbase4:33913] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs 2023-07-16 23:15:24,142 INFO [RS:1;jenkins-hbase4:33913] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33913%2C1689549296335:(num 1689549298581) 2023-07-16 23:15:24,142 DEBUG [RS:1;jenkins-hbase4:33913] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,142 INFO [RS:1;jenkins-hbase4:33913] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,145 INFO [RS:1;jenkins-hbase4:33913] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:24,145 INFO [RS:1;jenkins-hbase4:33913] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:24,145 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:24,145 INFO [RS:1;jenkins-hbase4:33913] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:24,145 INFO [RS:1;jenkins-hbase4:33913] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:24,150 INFO [RS:1;jenkins-hbase4:33913] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33913 2023-07-16 23:15:24,150 DEBUG [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs 2023-07-16 23:15:24,150 INFO [RS:0;jenkins-hbase4:38989] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38989%2C1689549296125:(num 1689549298581) 2023-07-16 23:15:24,150 DEBUG [RS:0;jenkins-hbase4:38989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,150 INFO [RS:0;jenkins-hbase4:38989] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,151 INFO [RS:0;jenkins-hbase4:38989] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:24,151 INFO [RS:0;jenkins-hbase4:38989] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:24,152 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:24,152 INFO [RS:0;jenkins-hbase4:38989] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:24,152 INFO [RS:0;jenkins-hbase4:38989] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 23:15:24,156 INFO [RS:0;jenkins-hbase4:38989] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38989 2023-07-16 23:15:24,158 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:24,158 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,158 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,159 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:24,159 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:24,159 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,159 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,159 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33913,1689549296335 2023-07-16 23:15:24,160 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,161 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.78 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/info/03ce18b2306c43aabf80ec7efb060713 2023-07-16 23:15:24,162 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33913,1689549296335] 2023-07-16 23:15:24,162 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33913,1689549296335; numProcessing=1 2023-07-16 23:15:24,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.11 KB at sequenceid=101 (bloomFilter=true), 
to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/.tmp/m/b1f9f1a366d14d939e0759dcf1be38e7 2023-07-16 23:15:24,162 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:24,162 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:24,163 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,163 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38989,1689549296125 2023-07-16 23:15:24,168 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 03ce18b2306c43aabf80ec7efb060713 2023-07-16 23:15:24,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b1f9f1a366d14d939e0759dcf1be38e7 2023-07-16 23:15:24,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/.tmp/m/b1f9f1a366d14d939e0759dcf1be38e7 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m/b1f9f1a366d14d939e0759dcf1be38e7 2023-07-16 23:15:24,177 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b1f9f1a366d14d939e0759dcf1be38e7 2023-07-16 23:15:24,177 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/m/b1f9f1a366d14d939e0759dcf1be38e7, entries=28, sequenceid=101, filesize=6.1 K 2023-07-16 23:15:24,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.11 KB/27756, heapSize ~44.63 KB/45704, currentSize=0 B/0 for 898ed5e7258b3e0527188384fae4bfe2 in 81ms, sequenceid=101, compaction requested=false 2023-07-16 23:15:24,183 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/rep_barrier/4d58296b054445339a1d5c28bba62428 2023-07-16 23:15:24,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/rsgroup/898ed5e7258b3e0527188384fae4bfe2/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-16 23:15:24,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:24,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:24,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 898ed5e7258b3e0527188384fae4bfe2: 2023-07-16 23:15:24,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689549299207.898ed5e7258b3e0527188384fae4bfe2. 2023-07-16 23:15:24,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dee4450ec086e99bcaec16c3a6848eb5, disabling compactions & flushes 2023-07-16 23:15:24,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:24,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:24,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. after waiting 0 ms 2023-07-16 23:15:24,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:24,191 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d58296b054445339a1d5c28bba62428 2023-07-16 23:15:24,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/default/unmovedTable/dee4450ec086e99bcaec16c3a6848eb5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 23:15:24,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:24,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dee4450ec086e99bcaec16c3a6848eb5: 2023-07-16 23:15:24,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689549318934.dee4450ec086e99bcaec16c3a6848eb5. 2023-07-16 23:15:24,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 246728e01e8e564172b05cb8c4263f93, disabling compactions & flushes 2023-07-16 23:15:24,193 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:24,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 
2023-07-16 23:15:24,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. after waiting 0 ms 2023-07-16 23:15:24,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:24,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/namespace/246728e01e8e564172b05cb8c4263f93/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-16 23:15:24,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:24,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 246728e01e8e564172b05cb8c4263f93: 2023-07-16 23:15:24,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689549299078.246728e01e8e564172b05cb8c4263f93. 2023-07-16 23:15:24,207 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/table/2cf1d1e126354076a8abcd6d998aacfd 2023-07-16 23:15:24,212 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2cf1d1e126354076a8abcd6d998aacfd 2023-07-16 23:15:24,213 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/info/03ce18b2306c43aabf80ec7efb060713 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info/03ce18b2306c43aabf80ec7efb060713 2023-07-16 23:15:24,218 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 03ce18b2306c43aabf80ec7efb060713 2023-07-16 23:15:24,218 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/info/03ce18b2306c43aabf80ec7efb060713, entries=93, sequenceid=210, filesize=15.5 K 2023-07-16 23:15:24,219 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/rep_barrier/4d58296b054445339a1d5c28bba62428 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier/4d58296b054445339a1d5c28bba62428 2023-07-16 23:15:24,225 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d58296b054445339a1d5c28bba62428 2023-07-16 23:15:24,225 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/rep_barrier/4d58296b054445339a1d5c28bba62428, entries=18, sequenceid=210, filesize=6.9 K 2023-07-16 23:15:24,226 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/.tmp/table/2cf1d1e126354076a8abcd6d998aacfd as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table/2cf1d1e126354076a8abcd6d998aacfd 2023-07-16 23:15:24,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2cf1d1e126354076a8abcd6d998aacfd 2023-07-16 23:15:24,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/table/2cf1d1e126354076a8abcd6d998aacfd, entries=27, sequenceid=210, filesize=7.2 K 2023-07-16 23:15:24,232 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.59 KB/78427, heapSize ~120.45 KB/123344, currentSize=0 B/0 for 1588230740 in 128ms, sequenceid=210, compaction requested=false 2023-07-16 23:15:24,242 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=18 2023-07-16 23:15:24,242 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:24,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:24,243 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 23:15:24,243 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:24,262 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,262 INFO [RS:1;jenkins-hbase4:33913] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33913,1689549296335; zookeeper connection closed. 
2023-07-16 23:15:24,262 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:33913-0x101706ac9920002, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,262 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3495fd47] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3495fd47 2023-07-16 23:15:24,263 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33913,1689549296335 already deleted, retry=false 2023-07-16 23:15:24,263 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33913,1689549296335 expired; onlineServers=3 2023-07-16 23:15:24,264 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38989,1689549296125] 2023-07-16 23:15:24,264 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38989,1689549296125; numProcessing=2 2023-07-16 23:15:24,265 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38989,1689549296125 already deleted, retry=false 2023-07-16 23:15:24,265 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38989,1689549296125 expired; onlineServers=2 2023-07-16 23:15:24,282 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:24,283 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 23:15:24,283 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 23:15:24,302 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43561,1689549300217; all regions closed. 2023-07-16 23:15:24,304 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41683,1689549296507; all regions closed. 
2023-07-16 23:15:24,312 DEBUG [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs 2023-07-16 23:15:24,312 INFO [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43561%2C1689549300217.meta:.meta(num 1689549301681) 2023-07-16 23:15:24,313 DEBUG [RS:2;jenkins-hbase4:41683] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs 2023-07-16 23:15:24,313 INFO [RS:2;jenkins-hbase4:41683] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41683%2C1689549296507:(num 1689549298581) 2023-07-16 23:15:24,313 DEBUG [RS:2;jenkins-hbase4:41683] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,313 INFO [RS:2;jenkins-hbase4:41683] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,313 INFO [RS:2;jenkins-hbase4:41683] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:24,313 INFO [RS:2;jenkins-hbase4:41683] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:24,313 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:24,313 INFO [RS:2;jenkins-hbase4:41683] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:24,314 INFO [RS:2;jenkins-hbase4:41683] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:24,314 INFO [RS:2;jenkins-hbase4:41683] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41683 2023-07-16 23:15:24,318 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:24,318 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41683,1689549296507 2023-07-16 23:15:24,318 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,320 DEBUG [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/oldWALs 2023-07-16 23:15:24,320 INFO [RS:3;jenkins-hbase4:43561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43561%2C1689549300217:(num 1689549300723) 2023-07-16 23:15:24,320 DEBUG [RS:3;jenkins-hbase4:43561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,320 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41683,1689549296507] 2023-07-16 23:15:24,320 INFO [RS:3;jenkins-hbase4:43561] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:24,320 DEBUG [RegionServerTracker-0] 
master.DeadServer(103): Processing jenkins-hbase4.apache.org,41683,1689549296507; numProcessing=3 2023-07-16 23:15:24,320 INFO [RS:3;jenkins-hbase4:43561] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:24,321 INFO [RS:3;jenkins-hbase4:43561] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43561 2023-07-16 23:15:24,321 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:24,321 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41683,1689549296507 already deleted, retry=false 2023-07-16 23:15:24,321 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41683,1689549296507 expired; onlineServers=1 2023-07-16 23:15:24,323 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43561,1689549300217 2023-07-16 23:15:24,323 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:24,325 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43561,1689549300217] 2023-07-16 23:15:24,325 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43561,1689549300217; numProcessing=4 2023-07-16 23:15:24,326 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43561,1689549300217 already deleted, retry=false 2023-07-16 23:15:24,326 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43561,1689549300217 expired; onlineServers=0 2023-07-16 23:15:24,326 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37359,1689549294108' ***** 2023-07-16 23:15:24,327 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 23:15:24,327 DEBUG [M:0;jenkins-hbase4:37359] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bcfce02, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:24,327 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:24,329 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:24,329 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-16 23:15:24,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:24,330 INFO [M:0;jenkins-hbase4:37359] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@55ffcf1a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 23:15:24,330 INFO [M:0;jenkins-hbase4:37359] server.AbstractConnector(383): Stopped ServerConnector@2092751{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:24,330 INFO [M:0;jenkins-hbase4:37359] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:24,331 INFO [M:0;jenkins-hbase4:37359] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34fd62ed{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:24,331 INFO [M:0;jenkins-hbase4:37359] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7410039f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:24,332 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37359,1689549294108 2023-07-16 23:15:24,332 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37359,1689549294108; all regions closed. 2023-07-16 23:15:24,332 DEBUG [M:0;jenkins-hbase4:37359] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:24,332 INFO [M:0;jenkins-hbase4:37359] master.HMaster(1491): Stopping master jetty server 2023-07-16 23:15:24,333 INFO [M:0;jenkins-hbase4:37359] server.AbstractConnector(383): Stopped ServerConnector@8b592d2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:24,333 DEBUG [M:0;jenkins-hbase4:37359] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 23:15:24,333 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 23:15:24,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549298098] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549298098,5,FailOnTimeoutGroup] 2023-07-16 23:15:24,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549298097] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549298097,5,FailOnTimeoutGroup] 2023-07-16 23:15:24,333 DEBUG [M:0;jenkins-hbase4:37359] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 23:15:24,334 INFO [M:0;jenkins-hbase4:37359] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 23:15:24,334 INFO [M:0;jenkins-hbase4:37359] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-16 23:15:24,334 INFO [M:0;jenkins-hbase4:37359] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 23:15:24,334 DEBUG [M:0;jenkins-hbase4:37359] master.HMaster(1512): Stopping service threads 2023-07-16 23:15:24,334 INFO [M:0;jenkins-hbase4:37359] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 23:15:24,334 ERROR [M:0;jenkins-hbase4:37359] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-16 23:15:24,335 INFO [M:0;jenkins-hbase4:37359] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 23:15:24,335 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 23:15:24,336 DEBUG [M:0;jenkins-hbase4:37359] zookeeper.ZKUtil(398): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 23:15:24,336 WARN [M:0;jenkins-hbase4:37359] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 23:15:24,336 INFO [M:0;jenkins-hbase4:37359] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 23:15:24,336 INFO [M:0;jenkins-hbase4:37359] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 23:15:24,336 DEBUG [M:0;jenkins-hbase4:37359] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 23:15:24,336 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:24,336 DEBUG [M:0;jenkins-hbase4:37359] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:24,336 DEBUG [M:0;jenkins-hbase4:37359] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 23:15:24,336 DEBUG [M:0;jenkins-hbase4:37359] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 23:15:24,336 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.05 KB heapSize=621.13 KB 2023-07-16 23:15:24,353 INFO [M:0;jenkins-hbase4:37359] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.05 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d141152194ba488db7931f2f0bfe7da4 2023-07-16 23:15:24,360 DEBUG [M:0;jenkins-hbase4:37359] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d141152194ba488db7931f2f0bfe7da4 as hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d141152194ba488db7931f2f0bfe7da4 2023-07-16 23:15:24,365 INFO [M:0;jenkins-hbase4:37359] regionserver.HStore(1080): Added hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d141152194ba488db7931f2f0bfe7da4, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-16 23:15:24,366 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegion(2948): Finished flush of dataSize ~519.05 KB/531512, heapSize ~621.12 KB/636024, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=1152, compaction requested=false 2023-07-16 23:15:24,368 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:24,368 DEBUG [M:0;jenkins-hbase4:37359] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:24,373 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:24,373 INFO [M:0;jenkins-hbase4:37359] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 23:15:24,373 INFO [M:0;jenkins-hbase4:37359] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37359 2023-07-16 23:15:24,375 DEBUG [M:0;jenkins-hbase4:37359] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37359,1689549294108 already deleted, retry=false 2023-07-16 23:15:24,762 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,762 INFO [M:0;jenkins-hbase4:37359] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37359,1689549294108; zookeeper connection closed. 2023-07-16 23:15:24,762 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): master:37359-0x101706ac9920000, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,862 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,862 INFO [RS:3;jenkins-hbase4:43561] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43561,1689549300217; zookeeper connection closed. 
2023-07-16 23:15:24,862 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:43561-0x101706ac992000b, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,863 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2a962c7e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2a962c7e 2023-07-16 23:15:24,963 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,963 INFO [RS:2;jenkins-hbase4:41683] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41683,1689549296507; zookeeper connection closed. 2023-07-16 23:15:24,963 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:41683-0x101706ac9920003, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:24,963 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19b8f2e9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19b8f2e9 2023-07-16 23:15:25,063 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:25,063 INFO [RS:0;jenkins-hbase4:38989] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38989,1689549296125; zookeeper connection closed. 2023-07-16 23:15:25,063 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): regionserver:38989-0x101706ac9920001, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:25,063 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@63a8a1c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@63a8a1c 2023-07-16 23:15:25,063 INFO [Listener at localhost/40131] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 23:15:25,064 WARN [Listener at localhost/40131] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:25,068 INFO [Listener at localhost/40131] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:25,171 WARN [BP-1339359975-172.31.14.131-1689549290377 heartbeating to localhost/127.0.0.1:34675] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:25,171 WARN [BP-1339359975-172.31.14.131-1689549290377 heartbeating to localhost/127.0.0.1:34675] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1339359975-172.31.14.131-1689549290377 (Datanode Uuid 868ca1ba-fb9c-4bc7-9f78-8e2c4cf64012) service to localhost/127.0.0.1:34675 2023-07-16 23:15:25,173 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data5/current/BP-1339359975-172.31.14.131-1689549290377] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
2023-07-16 23:15:25,173 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data6/current/BP-1339359975-172.31.14.131-1689549290377] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:25,174 WARN [Listener at localhost/40131] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:25,179 INFO [Listener at localhost/40131] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:25,181 WARN [BP-1339359975-172.31.14.131-1689549290377 heartbeating to localhost/127.0.0.1:34675] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:25,181 WARN [BP-1339359975-172.31.14.131-1689549290377 heartbeating to localhost/127.0.0.1:34675] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1339359975-172.31.14.131-1689549290377 (Datanode Uuid 940b67ca-7731-4a45-b3b2-b6cb647dfe14) service to localhost/127.0.0.1:34675 2023-07-16 23:15:25,182 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data3/current/BP-1339359975-172.31.14.131-1689549290377] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:25,182 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data4/current/BP-1339359975-172.31.14.131-1689549290377] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:25,184 WARN [Listener at localhost/40131] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:25,186 INFO [Listener at localhost/40131] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:25,221 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 23:15:25,288 WARN [BP-1339359975-172.31.14.131-1689549290377 heartbeating to localhost/127.0.0.1:34675] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:25,288 WARN [BP-1339359975-172.31.14.131-1689549290377 heartbeating to localhost/127.0.0.1:34675] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1339359975-172.31.14.131-1689549290377 (Datanode Uuid ffb69bb2-fbad-48a3-bdb3-6dbdeceec12c) service to localhost/127.0.0.1:34675 2023-07-16 23:15:25,289 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data1/current/BP-1339359975-172.31.14.131-1689549290377] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:25,289 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/cluster_b14fde1a-1c3e-bdee-d7b9-5694b71ef229/dfs/data/data2/current/BP-1339359975-172.31.14.131-1689549290377] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:25,321 INFO [Listener at localhost/40131] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:25,442 INFO [Listener at localhost/40131] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 23:15:25,509 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.log.dir so I do NOT create it in target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/70ae4571-6163-df8f-5d4f-ad289e5f1fb4/hadoop.tmp.dir so I do NOT create it in target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26, deleteOnExit=true 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/test.cache.data in system properties and HBase conf 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 23:15:25,510 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir in system properties and HBase conf 2023-07-16 23:15:25,511 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 23:15:25,511 INFO [Listener at 
localhost/40131] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 23:15:25,511 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 23:15:25,511 DEBUG [Listener at localhost/40131] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-16 23:15:25,511 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 23:15:25,512 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/nfs.dump.dir in system properties and HBase conf 2023-07-16 23:15:25,513 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir in system properties and HBase conf 2023-07-16 23:15:25,513 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 23:15:25,513 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 23:15:25,513 INFO [Listener at localhost/40131] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 23:15:25,518 WARN [Listener at localhost/40131] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 23:15:25,518 WARN [Listener at localhost/40131] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 23:15:25,538 DEBUG [Listener at localhost/40131-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101706ac992000a, quorum=127.0.0.1:63904, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 23:15:25,538 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101706ac992000a, quorum=127.0.0.1:63904, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 23:15:25,567 WARN [Listener at localhost/40131] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:25,570 INFO [Listener at localhost/40131] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:25,578 INFO [Listener at localhost/40131] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/Jetty_localhost_38651_hdfs____.fwkk21/webapp 2023-07-16 23:15:25,685 INFO [Listener at localhost/40131] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38651 2023-07-16 23:15:25,692 WARN [Listener at localhost/40131] 
conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 23:15:25,692 WARN [Listener at localhost/40131] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 23:15:25,812 WARN [Listener at localhost/37199] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:25,851 WARN [Listener at localhost/37199] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:15:25,857 WARN [Listener at localhost/37199] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:25,859 INFO [Listener at localhost/37199] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:25,881 INFO [Listener at localhost/37199] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/Jetty_localhost_39227_datanode____.f0vwzs/webapp 2023-07-16 23:15:25,999 INFO [Listener at localhost/37199] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39227 2023-07-16 23:15:26,018 WARN [Listener at localhost/40603] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:26,066 WARN [Listener at localhost/40603] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:15:26,070 WARN [Listener at localhost/40603] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:26,072 INFO [Listener at localhost/40603] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:26,092 INFO [Listener at localhost/40603] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/Jetty_localhost_45509_datanode____.j495to/webapp 2023-07-16 23:15:26,228 INFO [Listener at localhost/40603] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45509 2023-07-16 23:15:26,255 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x763a16b242662df9: Processing first storage report for DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8 from datanode 4da24e92-35d9-4942-a3cc-ce4d764e194a 2023-07-16 23:15:26,255 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x763a16b242662df9: from storage DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8 node DatanodeRegistration(127.0.0.1:40369, datanodeUuid=4da24e92-35d9-4942-a3cc-ce4d764e194a, infoPort=39821, infoSecurePort=0, ipcPort=40603, storageInfo=lv=-57;cid=testClusterID;nsid=335698446;c=1689549325522), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 23:15:26,255 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x763a16b242662df9: Processing first storage report for DS-8e81097a-6464-42f3-859e-872dbbd5abe2 from datanode 4da24e92-35d9-4942-a3cc-ce4d764e194a 2023-07-16 
23:15:26,255 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x763a16b242662df9: from storage DS-8e81097a-6464-42f3-859e-872dbbd5abe2 node DatanodeRegistration(127.0.0.1:40369, datanodeUuid=4da24e92-35d9-4942-a3cc-ce4d764e194a, infoPort=39821, infoSecurePort=0, ipcPort=40603, storageInfo=lv=-57;cid=testClusterID;nsid=335698446;c=1689549325522), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:26,260 WARN [Listener at localhost/42755] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:26,320 WARN [Listener at localhost/42755] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:15:26,327 WARN [Listener at localhost/42755] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:26,328 INFO [Listener at localhost/42755] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:26,333 INFO [Listener at localhost/42755] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/Jetty_localhost_45599_datanode____.nc3zmt/webapp 2023-07-16 23:15:26,444 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2f57235eaa207e7b: Processing first storage report for DS-f8de42be-09d2-4b02-97fa-98e1cb726ded from datanode a8ba4a90-130d-430d-820e-4642997076b1 2023-07-16 23:15:26,445 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2f57235eaa207e7b: from storage DS-f8de42be-09d2-4b02-97fa-98e1cb726ded node DatanodeRegistration(127.0.0.1:37095, datanodeUuid=a8ba4a90-130d-430d-820e-4642997076b1, infoPort=32785, infoSecurePort=0, ipcPort=42755, storageInfo=lv=-57;cid=testClusterID;nsid=335698446;c=1689549325522), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 23:15:26,445 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2f57235eaa207e7b: Processing first storage report for DS-076adedb-b290-49b9-956f-bff74af2b36e from datanode a8ba4a90-130d-430d-820e-4642997076b1 2023-07-16 23:15:26,445 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2f57235eaa207e7b: from storage DS-076adedb-b290-49b9-956f-bff74af2b36e node DatanodeRegistration(127.0.0.1:37095, datanodeUuid=a8ba4a90-130d-430d-820e-4642997076b1, infoPort=32785, infoSecurePort=0, ipcPort=42755, storageInfo=lv=-57;cid=testClusterID;nsid=335698446;c=1689549325522), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:26,461 INFO [Listener at localhost/42755] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45599 2023-07-16 23:15:26,479 WARN [Listener at localhost/41101] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:26,586 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbaef10baac757a3f: Processing first storage report for DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc from datanode 30b8bad1-1b72-4fc5-9f4c-e62a1aed4e17 2023-07-16 23:15:26,586 INFO [Block report 
processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbaef10baac757a3f: from storage DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc node DatanodeRegistration(127.0.0.1:38001, datanodeUuid=30b8bad1-1b72-4fc5-9f4c-e62a1aed4e17, infoPort=42351, infoSecurePort=0, ipcPort=41101, storageInfo=lv=-57;cid=testClusterID;nsid=335698446;c=1689549325522), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:26,586 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbaef10baac757a3f: Processing first storage report for DS-724406cf-acae-4a61-a65c-55734cde06f9 from datanode 30b8bad1-1b72-4fc5-9f4c-e62a1aed4e17 2023-07-16 23:15:26,586 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbaef10baac757a3f: from storage DS-724406cf-acae-4a61-a65c-55734cde06f9 node DatanodeRegistration(127.0.0.1:38001, datanodeUuid=30b8bad1-1b72-4fc5-9f4c-e62a1aed4e17, infoPort=42351, infoSecurePort=0, ipcPort=41101, storageInfo=lv=-57;cid=testClusterID;nsid=335698446;c=1689549325522), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:26,595 DEBUG [Listener at localhost/41101] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031 2023-07-16 23:15:26,599 INFO [Listener at localhost/41101] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/zookeeper_0, clientPort=58149, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 23:15:26,601 INFO [Listener at localhost/41101] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58149 2023-07-16 23:15:26,601 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:26,603 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:26,625 INFO [Listener at localhost/41101] util.FSUtils(471): Created version file at hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce with version=8 2023-07-16 23:15:26,625 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/hbase-staging 2023-07-16 23:15:26,626 DEBUG [Listener at localhost/41101] hbase.LocalHBaseCluster(134): Setting Master Port to random. 
2023-07-16 23:15:26,627 DEBUG [Listener at localhost/41101] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 23:15:26,627 DEBUG [Listener at localhost/41101] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 23:15:26,627 DEBUG [Listener at localhost/41101] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:26,628 INFO [Listener at localhost/41101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:26,629 INFO [Listener at localhost/41101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34891 2023-07-16 23:15:26,630 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:26,631 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:26,632 INFO [Listener at localhost/41101] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34891 connecting to ZooKeeper ensemble=127.0.0.1:58149 2023-07-16 23:15:26,642 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:348910x0, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:26,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34891-0x101706b4c080000 connected 2023-07-16 23:15:26,657 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:26,658 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:26,658 DEBUG [Listener at localhost/41101] 
zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:26,658 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34891 2023-07-16 23:15:26,659 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34891 2023-07-16 23:15:26,659 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34891 2023-07-16 23:15:26,659 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34891 2023-07-16 23:15:26,660 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34891 2023-07-16 23:15:26,662 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:26,662 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:26,662 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:26,662 INFO [Listener at localhost/41101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 23:15:26,663 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:26,663 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:26,663 INFO [Listener at localhost/41101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
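The repeated "Set watcher on znode that does not yet exist" lines above come from each process registering interest in /hbase/master, /hbase/running and /hbase/acl before those znodes are created, so it is notified as soon as they appear. The underlying mechanism is the ZooKeeper exists() call, which may return null yet still leaves a watch on the missing path; a minimal sketch against the plain ZooKeeper client (connect string, timeout and class name are illustrative):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class WatchMissingZNode {
        public static void main(String[] args) throws Exception {
            Watcher watcher = (WatchedEvent event) ->
                System.out.println("event " + event.getType() + " on " + event.getPath());
            ZooKeeper zk = new ZooKeeper("127.0.0.1:58149", 30000, watcher);
            // exists() may return null here, but the watch is registered either way
            // and fires with a NodeCreated event once /hbase/master is written.
            zk.exists("/hbase/master", watcher);
            Thread.sleep(5000);  // wait briefly for events in this toy example
            zk.close();
        }
    }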
2023-07-16 23:15:26,663 INFO [Listener at localhost/41101] http.HttpServer(1146): Jetty bound to port 42619 2023-07-16 23:15:26,663 INFO [Listener at localhost/41101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:26,665 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:26,666 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@487288d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:26,666 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:26,666 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e549292{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:26,783 INFO [Listener at localhost/41101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:26,784 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:26,784 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:26,785 INFO [Listener at localhost/41101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 23:15:26,785 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:26,787 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4fa3572f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/jetty-0_0_0_0-42619-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5071239047576010887/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 23:15:26,788 INFO [Listener at localhost/41101] server.AbstractConnector(333): Started ServerConnector@21b7a099{HTTP/1.1, (http/1.1)}{0.0.0.0:42619} 2023-07-16 23:15:26,788 INFO [Listener at localhost/41101] server.Server(415): Started @38412ms 2023-07-16 23:15:26,788 INFO [Listener at localhost/41101] master.HMaster(444): hbase.rootdir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce, hbase.cluster.distributed=false 2023-07-16 23:15:26,803 INFO [Listener at localhost/41101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:26,803 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,803 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,803 INFO 
[Listener at localhost/41101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:26,803 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,803 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:26,803 INFO [Listener at localhost/41101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:26,804 INFO [Listener at localhost/41101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36383 2023-07-16 23:15:26,804 INFO [Listener at localhost/41101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:26,806 DEBUG [Listener at localhost/41101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:26,806 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:26,807 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:26,808 INFO [Listener at localhost/41101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36383 connecting to ZooKeeper ensemble=127.0.0.1:58149 2023-07-16 23:15:26,813 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:363830x0, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:26,814 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36383-0x101706b4c080001 connected 2023-07-16 23:15:26,814 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:26,815 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:26,815 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:26,819 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36383 2023-07-16 23:15:26,821 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36383 2023-07-16 23:15:26,821 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36383 2023-07-16 23:15:26,824 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36383 2023-07-16 23:15:26,824 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36383 2023-07-16 23:15:26,827 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:26,827 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:26,827 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:26,828 INFO [Listener at localhost/41101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:26,828 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:26,828 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:26,828 INFO [Listener at localhost/41101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 23:15:26,830 INFO [Listener at localhost/41101] http.HttpServer(1146): Jetty bound to port 40721 2023-07-16 23:15:26,830 INFO [Listener at localhost/41101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:26,851 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:26,851 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c0a0bbf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:26,851 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:26,852 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e51c5f8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:26,978 INFO [Listener at localhost/41101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:26,979 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:26,979 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:26,980 INFO [Listener at localhost/41101] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:15:26,981 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:26,982 INFO 
[Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@701b0f6c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/jetty-0_0_0_0-40721-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2516852957240506248/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:26,983 INFO [Listener at localhost/41101] server.AbstractConnector(333): Started ServerConnector@24873ead{HTTP/1.1, (http/1.1)}{0.0.0.0:40721} 2023-07-16 23:15:26,984 INFO [Listener at localhost/41101] server.Server(415): Started @38608ms 2023-07-16 23:15:26,997 INFO [Listener at localhost/41101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:26,998 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,998 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,998 INFO [Listener at localhost/41101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:26,998 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:26,998 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:26,998 INFO [Listener at localhost/41101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:26,999 INFO [Listener at localhost/41101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35699 2023-07-16 23:15:27,000 INFO [Listener at localhost/41101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:27,002 DEBUG [Listener at localhost/41101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:27,003 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:27,005 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:27,006 INFO [Listener at localhost/41101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35699 connecting to ZooKeeper ensemble=127.0.0.1:58149 2023-07-16 23:15:27,010 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:356990x0, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 
23:15:27,011 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35699-0x101706b4c080002 connected 2023-07-16 23:15:27,011 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:27,012 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:27,012 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:27,013 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35699 2023-07-16 23:15:27,013 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35699 2023-07-16 23:15:27,013 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35699 2023-07-16 23:15:27,013 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35699 2023-07-16 23:15:27,013 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35699 2023-07-16 23:15:27,016 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:27,016 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:27,016 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:27,016 INFO [Listener at localhost/41101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:27,017 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:27,017 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:27,017 INFO [Listener at localhost/41101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
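Each region server above follows the same start-up recipe: RPC executors and a NettyRpcServer bound to a random port, a ZooKeeper session, then an embedded Jetty info server. Once they register, a test client only needs the ZooKeeper quorum to reach the cluster; a minimal sketch, assuming the quorum and client port printed earlier in this log (127.0.0.1:58149) and an illustrative class name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClient {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.setInt("hbase.zookeeper.property.clientPort", 58149);  // MiniZK port from the record above
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                for (TableName table : admin.listTableNames()) {
                    System.out.println(table.getNameAsString());
                }
            }
        }
    }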
2023-07-16 23:15:27,017 INFO [Listener at localhost/41101] http.HttpServer(1146): Jetty bound to port 42301 2023-07-16 23:15:27,018 INFO [Listener at localhost/41101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:27,019 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:27,019 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42c43c67{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:27,019 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:27,019 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3ab57fef{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:27,138 INFO [Listener at localhost/41101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:27,139 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:27,139 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:27,139 INFO [Listener at localhost/41101] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:15:27,140 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:27,141 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3dbde6d6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/jetty-0_0_0_0-42301-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2663270701706838763/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:27,142 INFO [Listener at localhost/41101] server.AbstractConnector(333): Started ServerConnector@35126f63{HTTP/1.1, (http/1.1)}{0.0.0.0:42301} 2023-07-16 23:15:27,142 INFO [Listener at localhost/41101] server.Server(415): Started @38766ms 2023-07-16 23:15:27,156 INFO [Listener at localhost/41101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:27,156 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:27,157 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:27,157 INFO [Listener at localhost/41101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:27,157 INFO 
[Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:27,157 INFO [Listener at localhost/41101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:27,157 INFO [Listener at localhost/41101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:27,158 INFO [Listener at localhost/41101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33393 2023-07-16 23:15:27,159 INFO [Listener at localhost/41101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:27,160 DEBUG [Listener at localhost/41101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:27,161 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:27,162 INFO [Listener at localhost/41101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:27,164 INFO [Listener at localhost/41101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33393 connecting to ZooKeeper ensemble=127.0.0.1:58149 2023-07-16 23:15:27,167 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:333930x0, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:27,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33393-0x101706b4c080003 connected 2023-07-16 23:15:27,169 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:27,170 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:27,170 DEBUG [Listener at localhost/41101] zookeeper.ZKUtil(164): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:27,171 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33393 2023-07-16 23:15:27,171 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33393 2023-07-16 23:15:27,171 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33393 2023-07-16 23:15:27,171 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33393 2023-07-16 23:15:27,172 DEBUG [Listener at localhost/41101] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33393 2023-07-16 23:15:27,173 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:27,174 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:27,174 INFO [Listener at localhost/41101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:27,174 INFO [Listener at localhost/41101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:27,174 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:27,174 INFO [Listener at localhost/41101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:27,175 INFO [Listener at localhost/41101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 23:15:27,175 INFO [Listener at localhost/41101] http.HttpServer(1146): Jetty bound to port 40221 2023-07-16 23:15:27,175 INFO [Listener at localhost/41101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:27,179 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:27,181 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50cc44fc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:27,182 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:27,182 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ce09518{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:27,310 INFO [Listener at localhost/41101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:27,311 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:27,311 INFO [Listener at localhost/41101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:27,312 INFO [Listener at localhost/41101] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:15:27,312 INFO [Listener at localhost/41101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:27,313 INFO [Listener at localhost/41101] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5732b517{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/java.io.tmpdir/jetty-0_0_0_0-40221-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8371588515184758796/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:27,315 INFO [Listener at localhost/41101] server.AbstractConnector(333): Started ServerConnector@4046bffa{HTTP/1.1, (http/1.1)}{0.0.0.0:40221} 2023-07-16 23:15:27,315 INFO [Listener at localhost/41101] server.Server(415): Started @38939ms 2023-07-16 23:15:27,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:27,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@19e622dc{HTTP/1.1, (http/1.1)}{0.0.0.0:44181} 2023-07-16 23:15:27,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38950ms 2023-07-16 23:15:27,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,327 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 23:15:27,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,329 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:27,329 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:27,330 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,329 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:27,329 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:27,331 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:15:27,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34891,1689549326627 from backup master directory 2023-07-16 23:15:27,333 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:15:27,334 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,334 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 23:15:27,334 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:27,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/hbase.id with ID: 208a4658-d3b5-4ea4-a25d-7bc30e4404b7 2023-07-16 23:15:27,368 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:27,372 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0a911219 to 127.0.0.1:58149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:27,386 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29dfcb3b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:27,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:27,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 23:15:27,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:27,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store-tmp 2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 23:15:27,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:27,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:27,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/WALs/jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,448 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34891%2C1689549326627, suffix=, logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/WALs/jenkins-hbase4.apache.org,34891,1689549326627, archiveDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/oldWALs, maxLogs=10 2023-07-16 23:15:27,464 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK] 2023-07-16 23:15:27,467 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK] 2023-07-16 23:15:27,467 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK] 2023-07-16 23:15:27,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/WALs/jenkins-hbase4.apache.org,34891,1689549326627/jenkins-hbase4.apache.org%2C34891%2C1689549326627.1689549327448 2023-07-16 23:15:27,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK], DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK], DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK]] 2023-07-16 23:15:27,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:27,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:27,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:27,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:27,478 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:27,480 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 23:15:27,480 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 23:15:27,481 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:27,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:27,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:27,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:27,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:27,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11813459200, jitterRate=0.10021412372589111}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:27,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:27,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 23:15:27,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 23:15:27,492 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 23:15:27,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 23:15:27,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 23:15:27,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 23:15:27,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 23:15:27,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 23:15:27,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 23:15:27,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 23:15:27,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 23:15:27,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 23:15:27,498 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 23:15:27,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 23:15:27,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 23:15:27,501 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:27,501 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:27,501 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-16 23:15:27,501 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:27,501 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34891,1689549326627, sessionid=0x101706b4c080000, setting cluster-up flag (Was=false) 2023-07-16 23:15:27,507 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 23:15:27,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,517 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,522 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 23:15:27,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:27,524 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.hbase-snapshot/.tmp 2023-07-16 23:15:27,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 23:15:27,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 23:15:27,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 23:15:27,528 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:15:27,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-16 23:15:27,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-16 23:15:27,530 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 23:15:27,531 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(951): ClusterId : 208a4658-d3b5-4ea4-a25d-7bc30e4404b7 2023-07-16 23:15:27,531 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(951): ClusterId : 208a4658-d3b5-4ea4-a25d-7bc30e4404b7 2023-07-16 23:15:27,532 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:27,533 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:27,533 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(951): ClusterId : 208a4658-d3b5-4ea4-a25d-7bc30e4404b7 2023-07-16 23:15:27,535 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:27,541 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:27,541 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:27,541 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:27,541 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:27,541 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:27,542 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:27,543 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:27,544 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:27,547 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:27,547 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ReadOnlyZKClient(139): Connect 0x58fdc025 to 127.0.0.1:58149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:27,547 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ReadOnlyZKClient(139): Connect 0x3aa5a540 to 127.0.0.1:58149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:27,548 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ReadOnlyZKClient(139): Connect 0x021929e9 to 127.0.0.1:58149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:27,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 23:15:27,560 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, 
isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 23:15:27,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 23:15:27,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:27,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689549357564 2023-07-16 23:15:27,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 23:15:27,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 23:15:27,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 23:15:27,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 23:15:27,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 23:15:27,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 23:15:27,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,565 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 23:15:27,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 23:15:27,566 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 23:15:27,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 23:15:27,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 23:15:27,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 23:15:27,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 23:15:27,566 DEBUG [RS:2;jenkins-hbase4:33393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34624c64, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:27,566 DEBUG [RS:2;jenkins-hbase4:33393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@305dce30, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:27,567 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:27,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549327566,5,FailOnTimeoutGroup] 2023-07-16 23:15:27,571 DEBUG [RS:0;jenkins-hbase4:36383] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8e14299, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:27,572 DEBUG [RS:0;jenkins-hbase4:36383] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4de3cde1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:27,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549327575,5,FailOnTimeoutGroup] 2023-07-16 23:15:27,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 23:15:27,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,575 DEBUG [RS:1;jenkins-hbase4:35699] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@563a254a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:27,576 DEBUG [RS:1;jenkins-hbase4:35699] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5205ab55, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:27,582 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:33393 2023-07-16 23:15:27,582 INFO [RS:2;jenkins-hbase4:33393] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:27,582 INFO [RS:2;jenkins-hbase4:33393] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:27,582 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 23:15:27,582 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34891,1689549326627 with isa=jenkins-hbase4.apache.org/172.31.14.131:33393, startcode=1689549327156 2023-07-16 23:15:27,583 DEBUG [RS:2;jenkins-hbase4:33393] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:27,586 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56625, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:27,586 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36383 2023-07-16 23:15:27,588 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34891] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,586 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35699 2023-07-16 23:15:27,588 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:15:27,588 INFO [RS:1;jenkins-hbase4:35699] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:27,588 INFO [RS:1;jenkins-hbase4:35699] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:27,588 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:15:27,588 INFO [RS:0;jenkins-hbase4:36383] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:27,588 INFO [RS:0;jenkins-hbase4:36383] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:27,588 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 23:15:27,588 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 23:15:27,589 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce 2023-07-16 23:15:27,589 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37199 2023-07-16 23:15:27,589 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42619 2023-07-16 23:15:27,589 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34891,1689549326627 with isa=jenkins-hbase4.apache.org/172.31.14.131:35699, startcode=1689549326997 2023-07-16 23:15:27,589 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34891,1689549326627 with isa=jenkins-hbase4.apache.org/172.31.14.131:36383, startcode=1689549326802 2023-07-16 23:15:27,589 DEBUG [RS:0;jenkins-hbase4:36383] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:27,589 DEBUG [RS:1;jenkins-hbase4:35699] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:27,590 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:27,591 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ZKUtil(162): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,591 WARN [RS:2;jenkins-hbase4:33393] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:27,591 INFO [RS:2;jenkins-hbase4:33393] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:27,591 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,591 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34101, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:27,591 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38179, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:27,591 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34891] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,592 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 23:15:27,592 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 23:15:27,592 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34891] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,592 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce 2023-07-16 23:15:27,592 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:15:27,592 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37199 2023-07-16 23:15:27,592 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 23:15:27,592 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42619 2023-07-16 23:15:27,592 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce 2023-07-16 23:15:27,592 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37199 2023-07-16 23:15:27,592 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42619 2023-07-16 23:15:27,600 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36383,1689549326802] 2023-07-16 23:15:27,600 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ZKUtil(162): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,600 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33393,1689549327156] 2023-07-16 23:15:27,600 WARN [RS:0;jenkins-hbase4:36383] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:27,600 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35699,1689549326997] 2023-07-16 23:15:27,600 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ZKUtil(162): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,600 INFO [RS:0;jenkins-hbase4:36383] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:27,600 WARN [RS:1;jenkins-hbase4:35699] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 23:15:27,600 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,600 INFO [RS:1;jenkins-hbase4:35699] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:27,600 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,609 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:27,611 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:27,611 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ZKUtil(162): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,611 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce 2023-07-16 23:15:27,611 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ZKUtil(162): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,611 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ZKUtil(162): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,612 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ZKUtil(162): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,612 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ZKUtil(162): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,612 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ZKUtil(162): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,613 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:27,613 INFO [RS:2;jenkins-hbase4:33393] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:27,613 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:27,614 INFO [RS:0;jenkins-hbase4:36383] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:27,614 INFO [RS:2;jenkins-hbase4:33393] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:27,614 INFO [RS:2;jenkins-hbase4:33393] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:27,614 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,614 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:27,615 INFO [RS:0;jenkins-hbase4:36383] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:27,615 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,616 INFO [RS:0;jenkins-hbase4:36383] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:27,616 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,616 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,616 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ZKUtil(162): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,616 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,617 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,617 DEBUG [RS:2;jenkins-hbase4:33393] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,617 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ZKUtil(162): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,617 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ZKUtil(162): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,617 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,617 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,617 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:27,618 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 INFO [RS:1;jenkins-hbase4:35699] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,618 DEBUG [RS:0;jenkins-hbase4:36383] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,623 INFO [RS:1;jenkins-hbase4:35699] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:27,624 INFO [RS:1;jenkins-hbase4:35699] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:27,624 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,624 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:27,624 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,624 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,624 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,624 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:27,626 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,626 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,627 DEBUG [RS:1;jenkins-hbase4:35699] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:27,632 INFO [RS:2;jenkins-hbase4:33393] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:27,632 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33393,1689549327156-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,633 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,634 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:27,634 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,634 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,634 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:27,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 23:15:27,637 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/info 2023-07-16 23:15:27,638 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:15:27,638 INFO [RS:0;jenkins-hbase4:36383] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:27,638 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36383,1689549326802-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
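The CompactionConfiguration line for region 1588230740 prints the effective selection parameters (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.200000, minCompactSize:128 MB, and so on). As a rough illustration, these correspond to the usual hbase.hstore.compaction.* keys; the sketch below sets a few of them on a Configuration with arbitrary example values.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Smallest/largest number of store files considered for one minor compaction
        // (logged as minFilesToCompact / maxFilesToCompact).
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        // Selection ratio used by ExploringCompactionPolicy (logged as "ratio 1.200000").
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        // Files below this size are always eligible for selection (logged as minCompactSize).
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
        System.out.println("compaction min files = " + conf.getInt("hbase.hstore.compaction.min", -1));
      }
    }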
2023-07-16 23:15:27,638 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:27,639 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:15:27,640 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:27,640 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:15:27,640 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:27,641 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:15:27,642 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/table 2023-07-16 23:15:27,642 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:15:27,643 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:27,644 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740 2023-07-16 23:15:27,644 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740 2023-07-16 23:15:27,645 INFO [RS:2;jenkins-hbase4:33393] regionserver.Replication(203): jenkins-hbase4.apache.org,33393,1689549327156 started 2023-07-16 23:15:27,645 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33393,1689549327156, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33393, sessionid=0x101706b4c080003 2023-07-16 23:15:27,645 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:27,645 DEBUG [RS:2;jenkins-hbase4:33393] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,645 DEBUG [RS:2;jenkins-hbase4:33393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33393,1689549327156' 2023-07-16 23:15:27,645 DEBUG [RS:2;jenkins-hbase4:33393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:27,646 DEBUG [RS:2;jenkins-hbase4:33393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:27,646 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 23:15:27,646 INFO [RS:1;jenkins-hbase4:35699] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:27,646 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:27,646 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35699,1689549326997-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
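The FlushLargeStoresPolicy line notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta descriptor, so it falls back to the memstore flush size divided by the number of families (42.7 M here). For a user table that bound can be set explicitly as a table-descriptor value; the sketch below shows one way to do that, with a hypothetical table name and an example threshold of 16 MB.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class PerFamilyFlushBound {
      public static void main(String[] args) {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table"))       // hypothetical table
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            // Per-column-family flush only considers families holding at least this much data.
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(16L * 1024 * 1024))
            .build();
        System.out.println(td);
      }
    }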
2023-07-16 23:15:27,647 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:27,647 DEBUG [RS:2;jenkins-hbase4:33393] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,647 DEBUG [RS:2;jenkins-hbase4:33393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33393,1689549327156' 2023-07-16 23:15:27,647 DEBUG [RS:2;jenkins-hbase4:33393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:27,647 DEBUG [RS:2;jenkins-hbase4:33393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:27,648 DEBUG [RS:2;jenkins-hbase4:33393] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:27,648 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:15:27,648 INFO [RS:2;jenkins-hbase4:33393] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 23:15:27,651 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,652 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ZKUtil(398): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 23:15:27,652 INFO [RS:2;jenkins-hbase4:33393] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 23:15:27,653 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,653 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:27,654 INFO [RS:0;jenkins-hbase4:36383] regionserver.Replication(203): jenkins-hbase4.apache.org,36383,1689549326802 started 2023-07-16 23:15:27,654 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36383,1689549326802, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36383, sessionid=0x101706b4c080001 2023-07-16 23:15:27,654 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:27,654 DEBUG [RS:0;jenkins-hbase4:36383] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,654 DEBUG [RS:0;jenkins-hbase4:36383] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36383,1689549326802' 2023-07-16 23:15:27,654 DEBUG [RS:0;jenkins-hbase4:36383] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:27,655 DEBUG [RS:0;jenkins-hbase4:36383] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:27,655 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:27,655 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:27,655 DEBUG [RS:0;jenkins-hbase4:36383] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:27,655 DEBUG [RS:0;jenkins-hbase4:36383] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36383,1689549326802' 2023-07-16 23:15:27,655 DEBUG [RS:0;jenkins-hbase4:36383] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:27,656 DEBUG [RS:0;jenkins-hbase4:36383] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:27,656 DEBUG [RS:0;jenkins-hbase4:36383] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:27,656 INFO [RS:0;jenkins-hbase4:36383] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 23:15:27,656 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,656 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ZKUtil(398): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 23:15:27,656 INFO [RS:0;jenkins-hbase4:36383] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 23:15:27,656 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,656 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
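The quota manager lines above show each region server initializing RPC quota support and finding no /hbase/rpc-throttle znode, so throttling stays at its enabled default. Quota support itself is a server-side switch (hbase.quota.enabled), and in the 2.x line the throttle can be flipped at runtime through the Admin API. The sketch below assumes Admin.switchRpcThrottle/isRpcThrottleEnabled are available (they were added during 2.x) and is illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RpcThrottleToggle {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Server-side switch that makes RegionServerRpcQuotaManager active;
        // shown here only to name the key.
        conf.setBoolean("hbase.quota.enabled", true);
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // The servers persist this switch under the /hbase/rpc-throttle znode.
          admin.switchRpcThrottle(false);
          System.out.println("rpc throttle enabled = " + admin.isRpcThrottleEnabled());
        }
      }
    }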
2023-07-16 23:15:27,658 INFO [RS:1;jenkins-hbase4:35699] regionserver.Replication(203): jenkins-hbase4.apache.org,35699,1689549326997 started 2023-07-16 23:15:27,658 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35699,1689549326997, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35699, sessionid=0x101706b4c080002 2023-07-16 23:15:27,658 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:27,658 DEBUG [RS:1;jenkins-hbase4:35699] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,658 DEBUG [RS:1;jenkins-hbase4:35699] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35699,1689549326997' 2023-07-16 23:15:27,658 DEBUG [RS:1;jenkins-hbase4:35699] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:27,658 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:27,659 DEBUG [RS:1;jenkins-hbase4:35699] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:27,659 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9846514240, jitterRate=-0.08297190070152283}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:15:27,659 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:15:27,659 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:15:27,659 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:27,659 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:15:27,659 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:27,659 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:15:27,659 DEBUG [RS:1;jenkins-hbase4:35699] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:27,659 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:15:27,660 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:15:27,659 DEBUG [RS:1;jenkins-hbase4:35699] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35699,1689549326997' 2023-07-16 23:15:27,660 DEBUG [RS:1;jenkins-hbase4:35699] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:27,660 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:27,660 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-07-16 23:15:27,660 DEBUG [RS:1;jenkins-hbase4:35699] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:27,660 DEBUG [RS:1;jenkins-hbase4:35699] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:27,660 INFO [RS:1;jenkins-hbase4:35699] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 23:15:27,660 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,661 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 23:15:27,661 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 23:15:27,661 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ZKUtil(398): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 23:15:27,661 INFO [RS:1;jenkins-hbase4:35699] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 23:15:27,661 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,661 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:27,661 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 23:15:27,662 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 23:15:27,663 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 23:15:27,756 INFO [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33393%2C1689549327156, suffix=, logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,33393,1689549327156, archiveDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs, maxLogs=32 2023-07-16 23:15:27,758 INFO [RS:0;jenkins-hbase4:36383] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36383%2C1689549326802, suffix=, logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,36383,1689549326802, archiveDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs, maxLogs=32 2023-07-16 23:15:27,763 INFO [RS:1;jenkins-hbase4:35699] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C35699%2C1689549326997, suffix=, logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,35699,1689549326997, archiveDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs, maxLogs=32 2023-07-16 23:15:27,775 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK] 2023-07-16 23:15:27,775 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK] 2023-07-16 23:15:27,775 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK] 2023-07-16 23:15:27,777 INFO [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,33393,1689549327156/jenkins-hbase4.apache.org%2C33393%2C1689549327156.1689549327757 2023-07-16 23:15:27,777 DEBUG [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK], DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK], DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK]] 2023-07-16 23:15:27,785 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK] 2023-07-16 23:15:27,785 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK] 2023-07-16 23:15:27,785 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK] 2023-07-16 23:15:27,786 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK] 2023-07-16 23:15:27,786 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK] 2023-07-16 23:15:27,786 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, 
datanodeId = DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK] 2023-07-16 23:15:27,788 INFO [RS:0;jenkins-hbase4:36383] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,36383,1689549326802/jenkins-hbase4.apache.org%2C36383%2C1689549326802.1689549327759 2023-07-16 23:15:27,788 INFO [RS:1;jenkins-hbase4:35699] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,35699,1689549326997/jenkins-hbase4.apache.org%2C35699%2C1689549326997.1689549327763 2023-07-16 23:15:27,788 DEBUG [RS:0;jenkins-hbase4:36383] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK], DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK], DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK]] 2023-07-16 23:15:27,788 DEBUG [RS:1;jenkins-hbase4:35699] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK], DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK], DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK]] 2023-07-16 23:15:27,813 DEBUG [jenkins-hbase4:34891] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 23:15:27,814 DEBUG [jenkins-hbase4:34891] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:27,814 DEBUG [jenkins-hbase4:34891] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:27,814 DEBUG [jenkins-hbase4:34891] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:27,814 DEBUG [jenkins-hbase4:34891] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:27,814 DEBUG [jenkins-hbase4:34891] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:27,815 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33393,1689549327156, state=OPENING 2023-07-16 23:15:27,816 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 23:15:27,817 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:27,820 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33393,1689549327156}] 2023-07-16 23:15:27,820 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:27,839 WARN [ReadOnlyZKClient-127.0.0.1:58149@0x0a911219] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 23:15:27,840 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, 
sasl=false 2023-07-16 23:15:27,841 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34784, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:27,842 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33393] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34784 deadline: 1689549387841, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,973 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:27,975 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:27,977 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:27,981 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 23:15:27,981 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:27,983 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33393%2C1689549327156.meta, suffix=.meta, logDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,33393,1689549327156, archiveDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs, maxLogs=32 2023-07-16 23:15:27,998 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK] 2023-07-16 23:15:27,998 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK] 2023-07-16 23:15:27,998 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK] 2023-07-16 23:15:28,000 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/WALs/jenkins-hbase4.apache.org,33393,1689549327156/jenkins-hbase4.apache.org%2C33393%2C1689549327156.meta.1689549327983.meta 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40369,DS-83f531a4-2ecd-4ae1-9b25-7e76397fc0f8,DISK], DatanodeInfoWithStorage[127.0.0.1:38001,DS-9f6448b9-c6ab-4e32-926b-7b9b055f6dfc,DISK], DatanodeInfoWithStorage[127.0.0.1:37095,DS-f8de42be-09d2-4b02-97fa-98e1cb726ded,DISK]] 2023-07-16 23:15:28,003 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 23:15:28,003 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 23:15:28,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 23:15:28,004 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 23:15:28,005 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/info 2023-07-16 23:15:28,005 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/info 2023-07-16 23:15:28,006 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:15:28,006 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,006 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:15:28,007 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:28,007 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:28,007 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:15:28,008 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,008 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:15:28,008 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/table 2023-07-16 23:15:28,009 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/table 2023-07-16 23:15:28,009 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:15:28,009 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,010 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740 2023-07-16 23:15:28,011 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740 2023-07-16 23:15:28,013 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 23:15:28,014 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:15:28,014 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12044447840, jitterRate=0.1217266172170639}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:15:28,014 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:15:28,015 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689549327973 2023-07-16 23:15:28,019 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 23:15:28,020 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 23:15:28,020 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33393,1689549327156, state=OPEN 2023-07-16 23:15:28,022 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 23:15:28,022 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:28,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 23:15:28,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33393,1689549327156 in 204 msec 2023-07-16 23:15:28,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 23:15:28,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 363 msec 2023-07-16 23:15:28,027 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 497 msec 2023-07-16 23:15:28,027 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689549328027, completionTime=-1 2023-07-16 23:15:28,027 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 23:15:28,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-16 23:15:28,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 23:15:28,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689549388030 2023-07-16 23:15:28,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689549448030 2023-07-16 23:15:28,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-16 23:15:28,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34891,1689549326627-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34891,1689549326627-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34891,1689549326627-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34891, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 23:15:28,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:28,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 23:15:28,036 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 23:15:28,037 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:28,038 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:28,039 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,040 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd empty. 2023-07-16 23:15:28,040 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,040 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 23:15:28,053 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:28,054 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0aed32643bffb6f94e0618f4c1e0e0dd, NAME => 'hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp 2023-07-16 23:15:28,063 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,063 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0aed32643bffb6f94e0618f4c1e0e0dd, disabling compactions & flushes 2023-07-16 23:15:28,063 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 
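The master creates 'hbase:namespace' with the descriptor shown above (a single 'info' family with BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192'). The same attributes can be expressed with the 2.x builder API; the sketch below mirrors that descriptor for a hypothetical user table rather than the system table itself.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateNamespaceLikeTable {
      public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_ns_like"))   // hypothetical table name
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setMaxVersions(10)                  // VERSIONS => '10'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .build())
            .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(td);
        }
      }
    }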
2023-07-16 23:15:28,063 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:28,063 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. after waiting 0 ms 2023-07-16 23:15:28,063 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:28,063 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:28,063 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0aed32643bffb6f94e0618f4c1e0e0dd: 2023-07-16 23:15:28,065 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:28,066 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549328066"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549328066"}]},"ts":"1689549328066"} 2023-07-16 23:15:28,068 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:28,069 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:28,069 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328069"}]},"ts":"1689549328069"} 2023-07-16 23:15:28,070 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 23:15:28,074 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:28,074 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:28,074 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:28,074 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:28,074 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:28,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0aed32643bffb6f94e0618f4c1e0e0dd, ASSIGN}] 2023-07-16 23:15:28,076 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0aed32643bffb6f94e0618f4c1e0e0dd, ASSIGN 2023-07-16 23:15:28,077 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0aed32643bffb6f94e0618f4c1e0e0dd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36383,1689549326802; forceNewPlan=false, retain=false 2023-07-16 23:15:28,145 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:28,146 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 23:15:28,148 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:28,148 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:28,150 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,151 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec empty. 
2023-07-16 23:15:28,151 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,151 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 23:15:28,164 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:28,165 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c1d021208cb108667616329e0059c9ec, NAME => 'hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp 2023-07-16 23:15:28,174 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,174 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c1d021208cb108667616329e0059c9ec, disabling compactions & flushes 2023-07-16 23:15:28,174 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:28,175 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:28,175 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. after waiting 0 ms 2023-07-16 23:15:28,175 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:28,175 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 
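The 'hbase:rsgroup' descriptor above attaches the MultiRowMutationEndpoint coprocessor and pins the table to DisabledRegionSplitPolicy. The builder API can express both; the following is a sketch for an arbitrary table, not a recipe for recreating the system table.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorAndSplitPolicy {
      public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_grouped"))   // hypothetical table name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            // Same endpoint class the rsgroup table loads; priority left at the default.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // Equivalent of the SPLIT_POLICY metadata in the logged descriptor.
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
        System.out.println(td);
      }
    }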
2023-07-16 23:15:28,175 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c1d021208cb108667616329e0059c9ec: 2023-07-16 23:15:28,177 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:28,178 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549328177"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549328177"}]},"ts":"1689549328177"} 2023-07-16 23:15:28,179 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:28,179 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:28,180 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328179"}]},"ts":"1689549328179"} 2023-07-16 23:15:28,180 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 23:15:28,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:28,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:28,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:28,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:28,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:28,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c1d021208cb108667616329e0059c9ec, ASSIGN}] 2023-07-16 23:15:28,184 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c1d021208cb108667616329e0059c9ec, ASSIGN 2023-07-16 23:15:28,185 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c1d021208cb108667616329e0059c9ec, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33393,1689549327156; forceNewPlan=false, retain=false 2023-07-16 23:15:28,185 INFO [jenkins-hbase4:34891] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 23:15:28,187 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0aed32643bffb6f94e0618f4c1e0e0dd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:28,187 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549328187"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549328187"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549328187"}]},"ts":"1689549328187"} 2023-07-16 23:15:28,187 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=c1d021208cb108667616329e0059c9ec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:28,188 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549328187"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549328187"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549328187"}]},"ts":"1689549328187"} 2023-07-16 23:15:28,188 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 0aed32643bffb6f94e0618f4c1e0e0dd, server=jenkins-hbase4.apache.org,36383,1689549326802}] 2023-07-16 23:15:28,189 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure c1d021208cb108667616329e0059c9ec, server=jenkins-hbase4.apache.org,33393,1689549327156}] 2023-07-16 23:15:28,341 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:28,341 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:28,343 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38152, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:28,347 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:28,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c1d021208cb108667616329e0059c9ec, NAME => 'hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. service=MultiRowMutationService 2023-07-16 23:15:28,348 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,348 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:28,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0aed32643bffb6f94e0618f4c1e0e0dd, NAME => 'hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:28,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,349 INFO [StoreOpener-c1d021208cb108667616329e0059c9ec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,350 INFO [StoreOpener-0aed32643bffb6f94e0618f4c1e0e0dd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,351 DEBUG [StoreOpener-c1d021208cb108667616329e0059c9ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/m 2023-07-16 23:15:28,351 DEBUG [StoreOpener-c1d021208cb108667616329e0059c9ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/m 2023-07-16 23:15:28,351 DEBUG 
[StoreOpener-0aed32643bffb6f94e0618f4c1e0e0dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/info 2023-07-16 23:15:28,351 DEBUG [StoreOpener-0aed32643bffb6f94e0618f4c1e0e0dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/info 2023-07-16 23:15:28,351 INFO [StoreOpener-c1d021208cb108667616329e0059c9ec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c1d021208cb108667616329e0059c9ec columnFamilyName m 2023-07-16 23:15:28,352 INFO [StoreOpener-0aed32643bffb6f94e0618f4c1e0e0dd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0aed32643bffb6f94e0618f4c1e0e0dd columnFamilyName info 2023-07-16 23:15:28,352 INFO [StoreOpener-c1d021208cb108667616329e0059c9ec-1] regionserver.HStore(310): Store=c1d021208cb108667616329e0059c9ec/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,352 INFO [StoreOpener-0aed32643bffb6f94e0618f4c1e0e0dd-1] regionserver.HStore(310): Store=0aed32643bffb6f94e0618f4c1e0e0dd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,353 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,355 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd 
2023-07-16 23:15:28,355 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,357 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:28,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:28,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:28,359 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c1d021208cb108667616329e0059c9ec; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@71e867e0, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:28,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c1d021208cb108667616329e0059c9ec: 2023-07-16 23:15:28,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec., pid=9, masterSystemTime=1689549328341 2023-07-16 23:15:28,362 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:28,364 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0aed32643bffb6f94e0618f4c1e0e0dd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10641396320, jitterRate=-0.008942738175392151}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:28,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0aed32643bffb6f94e0618f4c1e0e0dd: 2023-07-16 23:15:28,365 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd., pid=8, masterSystemTime=1689549328341 2023-07-16 23:15:28,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:28,367 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 
2023-07-16 23:15:28,369 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=c1d021208cb108667616329e0059c9ec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:28,369 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549328369"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549328369"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549328369"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549328369"}]},"ts":"1689549328369"} 2023-07-16 23:15:28,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:28,371 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:28,371 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0aed32643bffb6f94e0618f4c1e0e0dd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:28,371 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549328371"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549328371"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549328371"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549328371"}]},"ts":"1689549328371"} 2023-07-16 23:15:28,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 23:15:28,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure c1d021208cb108667616329e0059c9ec, server=jenkins-hbase4.apache.org,33393,1689549327156 in 183 msec 2023-07-16 23:15:28,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-16 23:15:28,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 0aed32643bffb6f94e0618f4c1e0e0dd, server=jenkins-hbase4.apache.org,36383,1689549326802 in 185 msec 2023-07-16 23:15:28,376 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-16 23:15:28,376 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c1d021208cb108667616329e0059c9ec, ASSIGN in 190 msec 2023-07-16 23:15:28,376 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:28,376 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328376"}]},"ts":"1689549328376"} 2023-07-16 23:15:28,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=5, resume processing ppid=4 2023-07-16 23:15:28,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0aed32643bffb6f94e0618f4c1e0e0dd, ASSIGN in 301 msec 2023-07-16 23:15:28,377 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:28,377 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328377"}]},"ts":"1689549328377"} 2023-07-16 23:15:28,378 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 23:15:28,378 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 23:15:28,380 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:28,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 235 msec 2023-07-16 23:15:28,382 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:28,383 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 346 msec 2023-07-16 23:15:28,437 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 23:15:28,439 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:28,439 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:28,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:28,444 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38162, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:28,449 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 23:15:28,450 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 23:15:28,450 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 23:15:28,454 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:28,454 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:28,455 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:28,458 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:15:28,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-16 23:15:28,459 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34891,1689549326627] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 23:15:28,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 23:15:28,477 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:28,480 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-16 23:15:28,484 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 23:15:28,487 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 23:15:28,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.152sec 2023-07-16 23:15:28,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-16 23:15:28,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:28,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-16 23:15:28,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-16 23:15:28,490 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:28,490 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:28,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-16 23:15:28,492 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,493 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea empty. 2023-07-16 23:15:28,493 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,493 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-16 23:15:28,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-16 23:15:28,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-16 23:15:28,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:28,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-16 23:15:28,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 23:15:28,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34891,1689549326627-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 23:15:28,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34891,1689549326627-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 23:15:28,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 23:15:28,506 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:28,508 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 066ccc40978a29d1c807c1a979b942ea, NAME => 'hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp 2023-07-16 23:15:28,515 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,516 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 066ccc40978a29d1c807c1a979b942ea, disabling compactions & flushes 2023-07-16 23:15:28,516 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:28,516 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:28,516 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. after waiting 0 ms 2023-07-16 23:15:28,516 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:28,516 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 
2023-07-16 23:15:28,516 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 066ccc40978a29d1c807c1a979b942ea: 2023-07-16 23:15:28,518 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:28,519 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689549328519"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549328519"}]},"ts":"1689549328519"} 2023-07-16 23:15:28,520 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:28,521 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:28,521 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328521"}]},"ts":"1689549328521"} 2023-07-16 23:15:28,522 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-16 23:15:28,524 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:28,524 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:28,524 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:28,524 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:28,524 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:28,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=066ccc40978a29d1c807c1a979b942ea, ASSIGN}] 2023-07-16 23:15:28,525 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=066ccc40978a29d1c807c1a979b942ea, ASSIGN 2023-07-16 23:15:28,526 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=066ccc40978a29d1c807c1a979b942ea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35699,1689549326997; forceNewPlan=false, retain=false 2023-07-16 23:15:28,531 DEBUG [Listener at localhost/41101] zookeeper.ReadOnlyZKClient(139): Connect 0x2a4e8813 to 127.0.0.1:58149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:28,536 DEBUG [Listener at localhost/41101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57f67347, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:28,537 DEBUG 
[hconnection-0x5d6a3efb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:28,539 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34806, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:28,540 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:28,540 INFO [Listener at localhost/41101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:28,542 DEBUG [Listener at localhost/41101] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 23:15:28,544 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49086, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 23:15:28,547 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 23:15:28,547 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:28,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 23:15:28,548 DEBUG [Listener at localhost/41101] zookeeper.ReadOnlyZKClient(139): Connect 0x256a9099 to 127.0.0.1:58149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:28,552 DEBUG [Listener at localhost/41101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@297bef29, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:28,553 INFO [Listener at localhost/41101] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58149 2023-07-16 23:15:28,555 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:28,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101706b4c08000a connected 2023-07-16 23:15:28,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-16 23:15:28,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-16 23:15:28,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 23:15:28,571 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): 
master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:28,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-16 23:15:28,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 23:15:28,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:28,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-16 23:15:28,674 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:28,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-16 23:15:28,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 23:15:28,676 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:28,676 INFO [jenkins-hbase4:34891] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:28,677 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=066ccc40978a29d1c807c1a979b942ea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:28,677 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:15:28,678 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689549328677"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549328677"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549328677"}]},"ts":"1689549328677"} 2023-07-16 23:15:28,679 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 066ccc40978a29d1c807c1a979b942ea, server=jenkins-hbase4.apache.org,35699,1689549326997}] 2023-07-16 23:15:28,679 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:28,681 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:28,681 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347 empty. 2023-07-16 23:15:28,682 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:28,682 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 23:15:28,702 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:28,703 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6468ba98939015b1bded5772ddb13347, NAME => 'np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp 2023-07-16 23:15:28,713 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,714 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 6468ba98939015b1bded5772ddb13347, disabling compactions & flushes 2023-07-16 23:15:28,714 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 
2023-07-16 23:15:28,714 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:28,714 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. after waiting 0 ms 2023-07-16 23:15:28,714 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:28,714 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:28,714 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 6468ba98939015b1bded5772ddb13347: 2023-07-16 23:15:28,716 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:28,717 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549328717"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549328717"}]},"ts":"1689549328717"} 2023-07-16 23:15:28,718 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:28,719 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:28,719 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328719"}]},"ts":"1689549328719"} 2023-07-16 23:15:28,720 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-16 23:15:28,724 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:28,724 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:28,724 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:28,724 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:28,724 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:28,724 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=6468ba98939015b1bded5772ddb13347, ASSIGN}] 2023-07-16 23:15:28,725 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=6468ba98939015b1bded5772ddb13347, ASSIGN 2023-07-16 23:15:28,726 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=6468ba98939015b1bded5772ddb13347, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35699,1689549326997; forceNewPlan=false, retain=false 2023-07-16 23:15:28,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 23:15:28,831 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:28,832 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:28,833 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:28,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:28,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 066ccc40978a29d1c807c1a979b942ea, NAME => 'hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:28,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:28,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,848 INFO [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,850 DEBUG [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea/q 2023-07-16 23:15:28,850 DEBUG [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea/q 2023-07-16 23:15:28,850 INFO [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 
6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 066ccc40978a29d1c807c1a979b942ea columnFamilyName q 2023-07-16 23:15:28,850 INFO [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] regionserver.HStore(310): Store=066ccc40978a29d1c807c1a979b942ea/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,851 INFO [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,852 DEBUG [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea/u 2023-07-16 23:15:28,852 DEBUG [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea/u 2023-07-16 23:15:28,852 INFO [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 066ccc40978a29d1c807c1a979b942ea columnFamilyName u 2023-07-16 23:15:28,853 INFO [StoreOpener-066ccc40978a29d1c807c1a979b942ea-1] regionserver.HStore(310): Store=066ccc40978a29d1c807c1a979b942ea/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:28,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-16 23:15:28,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:28,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:28,863 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 066ccc40978a29d1c807c1a979b942ea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11997732000, jitterRate=0.11737586557865143}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-16 23:15:28,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 066ccc40978a29d1c807c1a979b942ea: 2023-07-16 23:15:28,864 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea., pid=16, masterSystemTime=1689549328831 2023-07-16 23:15:28,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:28,869 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:28,869 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=066ccc40978a29d1c807c1a979b942ea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:28,869 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689549328869"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549328869"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549328869"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549328869"}]},"ts":"1689549328869"} 2023-07-16 23:15:28,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-16 23:15:28,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 066ccc40978a29d1c807c1a979b942ea, server=jenkins-hbase4.apache.org,35699,1689549326997 in 192 msec 2023-07-16 23:15:28,874 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 23:15:28,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=066ccc40978a29d1c807c1a979b942ea, ASSIGN in 348 msec 2023-07-16 23:15:28,875 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:28,875 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549328875"}]},"ts":"1689549328875"} 2023-07-16 23:15:28,876 INFO [jenkins-hbase4:34891] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 23:15:28,877 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6468ba98939015b1bded5772ddb13347, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:28,877 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549328877"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549328877"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549328877"}]},"ts":"1689549328877"} 2023-07-16 23:15:28,877 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-16 23:15:28,880 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:28,881 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 393 msec 2023-07-16 23:15:28,885 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 6468ba98939015b1bded5772ddb13347, server=jenkins-hbase4.apache.org,35699,1689549326997}] 2023-07-16 23:15:28,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 23:15:29,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 
2023-07-16 23:15:29,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6468ba98939015b1bded5772ddb13347, NAME => 'np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:29,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:29,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,042 INFO [StoreOpener-6468ba98939015b1bded5772ddb13347-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,044 DEBUG [StoreOpener-6468ba98939015b1bded5772ddb13347-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/np1/table1/6468ba98939015b1bded5772ddb13347/fam1 2023-07-16 23:15:29,044 DEBUG [StoreOpener-6468ba98939015b1bded5772ddb13347-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/np1/table1/6468ba98939015b1bded5772ddb13347/fam1 2023-07-16 23:15:29,044 INFO [StoreOpener-6468ba98939015b1bded5772ddb13347-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6468ba98939015b1bded5772ddb13347 columnFamilyName fam1 2023-07-16 23:15:29,045 INFO [StoreOpener-6468ba98939015b1bded5772ddb13347-1] regionserver.HStore(310): Store=6468ba98939015b1bded5772ddb13347/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:29,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/np1/table1/6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/np1/table1/6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/np1/table1/6468ba98939015b1bded5772ddb13347/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:29,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6468ba98939015b1bded5772ddb13347; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11857140160, jitterRate=0.10428223013877869}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:29,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6468ba98939015b1bded5772ddb13347: 2023-07-16 23:15:29,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347., pid=18, masterSystemTime=1689549329037 2023-07-16 23:15:29,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:29,052 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:29,052 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6468ba98939015b1bded5772ddb13347, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:29,052 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549329052"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549329052"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549329052"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549329052"}]},"ts":"1689549329052"} 2023-07-16 23:15:29,055 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 23:15:29,055 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 6468ba98939015b1bded5772ddb13347, server=jenkins-hbase4.apache.org,35699,1689549326997 in 169 msec 2023-07-16 23:15:29,056 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-16 23:15:29,056 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=6468ba98939015b1bded5772ddb13347, ASSIGN in 331 msec 2023-07-16 23:15:29,057 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:29,057 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549329057"}]},"ts":"1689549329057"} 2023-07-16 23:15:29,058 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-16 23:15:29,060 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:29,061 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 389 msec 2023-07-16 23:15:29,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 23:15:29,279 INFO [Listener at localhost/41101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-16 23:15:29,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:29,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-16 23:15:29,282 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:29,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-16 23:15:29,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 23:15:29,300 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:29,301 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53938, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:29,304 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-16 23:15:29,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 23:15:29,388 INFO [Listener at localhost/41101] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-16 23:15:29,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:29,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:29,390 INFO [Listener at localhost/41101] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-16 23:15:29,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-16 23:15:29,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-16 23:15:29,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 23:15:29,394 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549329394"}]},"ts":"1689549329394"} 2023-07-16 23:15:29,395 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-16 23:15:29,396 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-16 23:15:29,397 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=6468ba98939015b1bded5772ddb13347, UNASSIGN}] 2023-07-16 23:15:29,398 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=6468ba98939015b1bded5772ddb13347, UNASSIGN 2023-07-16 23:15:29,398 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6468ba98939015b1bded5772ddb13347, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:29,398 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549329398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549329398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549329398"}]},"ts":"1689549329398"} 2023-07-16 23:15:29,400 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 6468ba98939015b1bded5772ddb13347, server=jenkins-hbase4.apache.org,35699,1689549326997}] 2023-07-16 23:15:29,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 23:15:29,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6468ba98939015b1bded5772ddb13347, disabling compactions & flushes 2023-07-16 23:15:29,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:29,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:29,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. after waiting 0 ms 2023-07-16 23:15:29,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 2023-07-16 23:15:29,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/np1/table1/6468ba98939015b1bded5772ddb13347/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:29,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347. 
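The rollback above is the namespace region quota doing its job: np1 is limited to 5 regions, so a second table that would bring the count to 6 is rejected before any regions are created. A minimal sketch of setting up and exercising such a quota with the standard Admin API, assuming the usual namespace quota property key:

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    // Create a namespace capped at 5 regions across all of its tables.
    static void createCappedNamespace(Admin admin) throws IOException {
      NamespaceDescriptor ns = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build();
      admin.createNamespace(ns);
    }

    // A create that would exceed the cap fails on the client; the master rolls the
    // CreateTableProcedure back with QuotaExceededException, as in the log above.
    static boolean tryCreate(Admin admin, TableDescriptor desc, byte[][] splitKeys) {
      try {
        admin.createTable(desc, splitKeys);
        return true;
      } catch (IOException e) {
        return false;   // quota violation (or any other create failure)
      }
    }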
2023-07-16 23:15:29,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6468ba98939015b1bded5772ddb13347: 2023-07-16 23:15:29,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,559 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6468ba98939015b1bded5772ddb13347, regionState=CLOSED 2023-07-16 23:15:29,559 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549329559"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549329559"}]},"ts":"1689549329559"} 2023-07-16 23:15:29,561 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-16 23:15:29,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 6468ba98939015b1bded5772ddb13347, server=jenkins-hbase4.apache.org,35699,1689549326997 in 160 msec 2023-07-16 23:15:29,563 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-16 23:15:29,563 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=6468ba98939015b1bded5772ddb13347, UNASSIGN in 164 msec 2023-07-16 23:15:29,563 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549329563"}]},"ts":"1689549329563"} 2023-07-16 23:15:29,565 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-16 23:15:29,567 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-16 23:15:29,568 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 177 msec 2023-07-16 23:15:29,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 23:15:29,696 INFO [Listener at localhost/41101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-16 23:15:29,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-16 23:15:29,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-16 23:15:29,699 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 23:15:29,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-16 23:15:29,700 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 23:15:29,701 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:29,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:15:29,703 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,705 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347/fam1, FileablePath, hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347/recovered.edits] 2023-07-16 23:15:29,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 23:15:29,711 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347/recovered.edits/4.seqid to hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/archive/data/np1/table1/6468ba98939015b1bded5772ddb13347/recovered.edits/4.seqid 2023-07-16 23:15:29,711 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/.tmp/data/np1/table1/6468ba98939015b1bded5772ddb13347 2023-07-16 23:15:29,711 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 23:15:29,714 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 23:15:29,715 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-16 23:15:29,717 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-16 23:15:29,718 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 23:15:29,718 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-16 23:15:29,718 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549329718"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:29,719 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 23:15:29,719 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6468ba98939015b1bded5772ddb13347, NAME => 'np1:table1,,1689549328671.6468ba98939015b1bded5772ddb13347.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 23:15:29,719 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
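Dropping np1:table1 follows the usual two-step client flow: disable the table (unassigning its regions, as in the DisableTableProcedure above) and then delete it, which archives the region directory and removes its rows from hbase:meta. A minimal sketch, assuming an Admin handle obtained elsewhere:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // A table must be disabled before it can be deleted; the master then runs the
    // unassign / archive / remove-from-META steps seen in the procedure log above.
    static void dropTable(Admin admin) throws IOException {
      TableName t = TableName.valueOf("np1", "table1");
      if (admin.tableExists(t)) {
        if (admin.isTableEnabled(t)) {
          admin.disableTable(t);
        }
        admin.deleteTable(t);
      }
    }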
2023-07-16 23:15:29,719 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549329719"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:29,721 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-16 23:15:29,722 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 23:15:29,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-16 23:15:29,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 23:15:29,807 INFO [Listener at localhost/41101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-16 23:15:29,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-16 23:15:29,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-16 23:15:29,820 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 23:15:29,822 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 23:15:29,824 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 23:15:29,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 23:15:29,826 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-16 23:15:29,826 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:29,826 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 23:15:29,828 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 23:15:29,829 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-16 23:15:29,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34891] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 23:15:29,927 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 23:15:29,927 INFO [Listener at 
localhost/41101] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 23:15:29,927 DEBUG [Listener at localhost/41101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2a4e8813 to 127.0.0.1:58149 2023-07-16 23:15:29,927 DEBUG [Listener at localhost/41101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:29,927 DEBUG [Listener at localhost/41101] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 23:15:29,927 DEBUG [Listener at localhost/41101] util.JVMClusterUtil(257): Found active master hash=1579043695, stopped=false 2023-07-16 23:15:29,927 DEBUG [Listener at localhost/41101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 23:15:29,928 DEBUG [Listener at localhost/41101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 23:15:29,928 DEBUG [Listener at localhost/41101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-16 23:15:29,928 INFO [Listener at localhost/41101] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:29,930 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:29,930 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:29,930 INFO [Listener at localhost/41101] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 23:15:29,930 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:29,930 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:29,930 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:29,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:29,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:29,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:29,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-07-16 23:15:29,932 DEBUG [Listener at localhost/41101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a911219 to 127.0.0.1:58149 2023-07-16 23:15:29,933 DEBUG [Listener at localhost/41101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:29,933 INFO [Listener at localhost/41101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36383,1689549326802' ***** 2023-07-16 23:15:29,933 INFO [Listener at localhost/41101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:29,933 INFO [Listener at localhost/41101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35699,1689549326997' ***** 2023-07-16 23:15:29,933 INFO [Listener at localhost/41101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:29,933 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:29,933 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:29,933 INFO [Listener at localhost/41101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33393,1689549327156' ***** 2023-07-16 23:15:29,934 INFO [Listener at localhost/41101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:29,936 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:29,936 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:29,937 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:29,946 INFO [RS:1;jenkins-hbase4:35699] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3dbde6d6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:29,946 INFO [RS:0;jenkins-hbase4:36383] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@701b0f6c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:29,946 INFO [RS:2;jenkins-hbase4:33393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5732b517{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:29,946 INFO [RS:1;jenkins-hbase4:35699] server.AbstractConnector(383): Stopped ServerConnector@35126f63{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:29,946 INFO [RS:1;jenkins-hbase4:35699] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:29,946 INFO [RS:0;jenkins-hbase4:36383] server.AbstractConnector(383): Stopped ServerConnector@24873ead{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:29,946 INFO [RS:2;jenkins-hbase4:33393] server.AbstractConnector(383): Stopped ServerConnector@4046bffa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:29,947 INFO [RS:1;jenkins-hbase4:35699] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@3ab57fef{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:29,947 INFO [RS:0;jenkins-hbase4:36383] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:29,949 INFO [RS:1;jenkins-hbase4:35699] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42c43c67{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:29,947 INFO [RS:2;jenkins-hbase4:33393] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:29,949 INFO [RS:0;jenkins-hbase4:36383] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e51c5f8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:29,949 INFO [RS:2;jenkins-hbase4:33393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ce09518{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:29,950 INFO [RS:0;jenkins-hbase4:36383] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c0a0bbf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:29,950 INFO [RS:2;jenkins-hbase4:33393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50cc44fc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:29,950 INFO [RS:1;jenkins-hbase4:35699] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:29,950 INFO [RS:1;jenkins-hbase4:35699] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:29,950 INFO [RS:1;jenkins-hbase4:35699] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
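Everything from here on is the mini-cluster shutting down: the listener requests a cluster stop, the /hbase/running znode is removed, and each region server stops its info server before closing its regions. A minimal sketch of the test lifecycle that produces this teardown, assuming the standard HBaseTestingUtility usage:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Start a mini-cluster with three region servers (as in this run) and shut it
    // down again; shutdownMiniCluster() drives the orderly stop logged below.
    static void runWithMiniCluster() throws Exception {
      HBaseTestingUtility util = new HBaseTestingUtility();
      util.startMiniCluster(3);
      try {
        // ... test body ...
      } finally {
        util.shutdownMiniCluster();
      }
    }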
2023-07-16 23:15:29,950 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(3305): Received CLOSE for 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:29,950 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:29,950 DEBUG [RS:1;jenkins-hbase4:35699] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58fdc025 to 127.0.0.1:58149 2023-07-16 23:15:29,950 DEBUG [RS:1;jenkins-hbase4:35699] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:29,951 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 23:15:29,952 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1478): Online Regions={066ccc40978a29d1c807c1a979b942ea=hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea.} 2023-07-16 23:15:29,952 INFO [RS:0;jenkins-hbase4:36383] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:29,952 DEBUG [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1504): Waiting on 066ccc40978a29d1c807c1a979b942ea 2023-07-16 23:15:29,952 INFO [RS:0;jenkins-hbase4:36383] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:29,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 066ccc40978a29d1c807c1a979b942ea, disabling compactions & flushes 2023-07-16 23:15:29,952 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:29,952 INFO [RS:0;jenkins-hbase4:36383] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 23:15:29,953 INFO [RS:2;jenkins-hbase4:33393] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:29,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:29,953 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:29,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:29,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. after waiting 0 ms 2023-07-16 23:15:29,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:29,953 INFO [RS:2;jenkins-hbase4:33393] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:29,954 INFO [RS:2;jenkins-hbase4:33393] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 23:15:29,954 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(3305): Received CLOSE for c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:29,953 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(3305): Received CLOSE for 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:29,954 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:29,954 DEBUG [RS:2;jenkins-hbase4:33393] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3aa5a540 to 127.0.0.1:58149 2023-07-16 23:15:29,954 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:29,954 DEBUG [RS:2;jenkins-hbase4:33393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:29,954 DEBUG [RS:0;jenkins-hbase4:36383] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x021929e9 to 127.0.0.1:58149 2023-07-16 23:15:29,954 DEBUG [RS:0;jenkins-hbase4:36383] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:29,954 INFO [RS:2;jenkins-hbase4:33393] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:29,955 INFO [RS:2;jenkins-hbase4:33393] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:29,955 INFO [RS:2;jenkins-hbase4:33393] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:29,954 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 23:15:29,955 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 23:15:29,955 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1478): Online Regions={0aed32643bffb6f94e0618f4c1e0e0dd=hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd.} 2023-07-16 23:15:29,955 DEBUG [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1504): Waiting on 0aed32643bffb6f94e0618f4c1e0e0dd 2023-07-16 23:15:29,955 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 23:15:29,955 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1478): Online Regions={c1d021208cb108667616329e0059c9ec=hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec., 1588230740=hbase:meta,,1.1588230740} 2023-07-16 23:15:29,955 DEBUG [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1504): Waiting on 1588230740, c1d021208cb108667616329e0059c9ec 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0aed32643bffb6f94e0618f4c1e0e0dd, disabling compactions & flushes 2023-07-16 23:15:29,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c1d021208cb108667616329e0059c9ec, disabling compactions & flushes 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 
2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:15:29,960 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/quota/066ccc40978a29d1c807c1a979b942ea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:29,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-16 23:15:29,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:29,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:29,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. after waiting 0 ms 2023-07-16 23:15:29,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. after waiting 0 ms 2023-07-16 23:15:29,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:29,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:29,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0aed32643bffb6f94e0618f4c1e0e0dd 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-16 23:15:29,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c1d021208cb108667616329e0059c9ec 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-16 23:15:29,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 2023-07-16 23:15:29,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 066ccc40978a29d1c807c1a979b942ea: 2023-07-16 23:15:29,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689549328487.066ccc40978a29d1c807c1a979b942ea. 
2023-07-16 23:15:29,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/.tmp/info/1781970e48bb49daa423e336046ab79e 2023-07-16 23:15:29,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1781970e48bb49daa423e336046ab79e 2023-07-16 23:15:29,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/.tmp/info/1781970e48bb49daa423e336046ab79e as hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/info/1781970e48bb49daa423e336046ab79e 2023-07-16 23:15:30,003 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/.tmp/info/4c990e843830405d8d731c6981de1267 2023-07-16 23:15:30,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1781970e48bb49daa423e336046ab79e 2023-07-16 23:15:30,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/info/1781970e48bb49daa423e336046ab79e, entries=3, sequenceid=8, filesize=5.0 K 2023-07-16 23:15:30,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/.tmp/m/dcf1e5b38a9c4eebb308f12d75077df1 2023-07-16 23:15:30,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 0aed32643bffb6f94e0618f4c1e0e0dd in 46ms, sequenceid=8, compaction requested=false 2023-07-16 23:15:30,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 23:15:30,012 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4c990e843830405d8d731c6981de1267 2023-07-16 23:15:30,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/namespace/0aed32643bffb6f94e0618f4c1e0e0dd/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-16 23:15:30,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 
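Each region closed during the shutdown still holds un-flushed edits, so the close path performs one last memstore flush: data is written to a temporary HFile under the region's .tmp directory and then committed into the column family directory, as logged above for hbase:namespace and hbase:rsgroup. The same flush can also be requested explicitly from a client, e.g. (illustrative only):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Force a table's memstores to be flushed to HFiles on demand; region close
    // does the equivalent implicitly before the region is marked closed.
    static void flushTable(Admin admin) throws IOException {
      admin.flush(TableName.valueOf("hbase", "namespace"));
    }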
2023-07-16 23:15:30,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0aed32643bffb6f94e0618f4c1e0e0dd: 2023-07-16 23:15:30,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689549328035.0aed32643bffb6f94e0618f4c1e0e0dd. 2023-07-16 23:15:30,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/.tmp/m/dcf1e5b38a9c4eebb308f12d75077df1 as hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/m/dcf1e5b38a9c4eebb308f12d75077df1 2023-07-16 23:15:30,021 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:30,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/m/dcf1e5b38a9c4eebb308f12d75077df1, entries=1, sequenceid=7, filesize=4.9 K 2023-07-16 23:15:30,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for c1d021208cb108667616329e0059c9ec in 64ms, sequenceid=7, compaction requested=false 2023-07-16 23:15:30,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 23:15:30,027 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:30,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/.tmp/rep_barrier/6d7e60a39be74a9b8b0c7e735e5154a6 2023-07-16 23:15:30,051 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6d7e60a39be74a9b8b0c7e735e5154a6 2023-07-16 23:15:30,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/rsgroup/c1d021208cb108667616329e0059c9ec/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-16 23:15:30,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:30,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 2023-07-16 23:15:30,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c1d021208cb108667616329e0059c9ec: 2023-07-16 23:15:30,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689549328144.c1d021208cb108667616329e0059c9ec. 
2023-07-16 23:15:30,064 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/.tmp/table/001a703b6871443c88c2d8659b3eb578 2023-07-16 23:15:30,070 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 001a703b6871443c88c2d8659b3eb578 2023-07-16 23:15:30,071 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/.tmp/info/4c990e843830405d8d731c6981de1267 as hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/info/4c990e843830405d8d731c6981de1267 2023-07-16 23:15:30,077 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4c990e843830405d8d731c6981de1267 2023-07-16 23:15:30,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/info/4c990e843830405d8d731c6981de1267, entries=32, sequenceid=31, filesize=8.5 K 2023-07-16 23:15:30,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/.tmp/rep_barrier/6d7e60a39be74a9b8b0c7e735e5154a6 as hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/rep_barrier/6d7e60a39be74a9b8b0c7e735e5154a6 2023-07-16 23:15:30,088 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6d7e60a39be74a9b8b0c7e735e5154a6 2023-07-16 23:15:30,088 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/rep_barrier/6d7e60a39be74a9b8b0c7e735e5154a6, entries=1, sequenceid=31, filesize=4.9 K 2023-07-16 23:15:30,089 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/.tmp/table/001a703b6871443c88c2d8659b3eb578 as hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/table/001a703b6871443c88c2d8659b3eb578 2023-07-16 23:15:30,095 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 001a703b6871443c88c2d8659b3eb578 2023-07-16 23:15:30,095 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/table/001a703b6871443c88c2d8659b3eb578, entries=8, sequenceid=31, filesize=5.2 K 2023-07-16 23:15:30,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 
KB/11312, currentSize=0 B/0 for 1588230740 in 135ms, sequenceid=31, compaction requested=false 2023-07-16 23:15:30,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 23:15:30,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-16 23:15:30,106 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:30,106 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:30,106 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 23:15:30,106 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:30,152 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35699,1689549326997; all regions closed. 2023-07-16 23:15:30,152 DEBUG [RS:1;jenkins-hbase4:35699] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 23:15:30,155 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36383,1689549326802; all regions closed. 2023-07-16 23:15:30,155 DEBUG [RS:0;jenkins-hbase4:36383] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 23:15:30,155 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33393,1689549327156; all regions closed. 2023-07-16 23:15:30,155 DEBUG [RS:2;jenkins-hbase4:33393] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 23:15:30,160 DEBUG [RS:1;jenkins-hbase4:35699] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs 2023-07-16 23:15:30,160 INFO [RS:1;jenkins-hbase4:35699] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35699%2C1689549326997:(num 1689549327763) 2023-07-16 23:15:30,160 DEBUG [RS:1;jenkins-hbase4:35699] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:30,160 INFO [RS:1;jenkins-hbase4:35699] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:30,161 INFO [RS:1;jenkins-hbase4:35699] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:30,161 INFO [RS:1;jenkins-hbase4:35699] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:30,161 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:30,161 INFO [RS:1;jenkins-hbase4:35699] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:30,161 INFO [RS:1;jenkins-hbase4:35699] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 23:15:30,165 INFO [RS:1;jenkins-hbase4:35699] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35699 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35699,1689549326997 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:30,169 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:30,170 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35699,1689549326997] 2023-07-16 23:15:30,171 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35699,1689549326997; numProcessing=1 2023-07-16 23:15:30,173 DEBUG [RS:0;jenkins-hbase4:36383] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs 2023-07-16 23:15:30,173 INFO [RS:0;jenkins-hbase4:36383] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36383%2C1689549326802:(num 1689549327759) 2023-07-16 23:15:30,173 DEBUG [RS:0;jenkins-hbase4:36383] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:30,173 INFO [RS:0;jenkins-hbase4:36383] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:30,174 INFO [RS:0;jenkins-hbase4:36383] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:30,174 INFO [RS:0;jenkins-hbase4:36383] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:30,174 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-16 23:15:30,174 INFO [RS:0;jenkins-hbase4:36383] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:30,174 INFO [RS:0;jenkins-hbase4:36383] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:30,174 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35699,1689549326997 already deleted, retry=false 2023-07-16 23:15:30,175 INFO [RS:0;jenkins-hbase4:36383] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36383 2023-07-16 23:15:30,175 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35699,1689549326997 expired; onlineServers=2 2023-07-16 23:15:30,176 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:30,176 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36383,1689549326802 2023-07-16 23:15:30,177 DEBUG [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs 2023-07-16 23:15:30,177 INFO [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33393%2C1689549327156.meta:.meta(num 1689549327983) 2023-07-16 23:15:30,179 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:30,181 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36383,1689549326802] 2023-07-16 23:15:30,181 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36383,1689549326802; numProcessing=2 2023-07-16 23:15:30,182 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36383,1689549326802 already deleted, retry=false 2023-07-16 23:15:30,182 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36383,1689549326802 expired; onlineServers=1 2023-07-16 23:15:30,187 DEBUG [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/oldWALs 2023-07-16 23:15:30,187 INFO [RS:2;jenkins-hbase4:33393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33393%2C1689549327156:(num 1689549327757) 2023-07-16 23:15:30,187 DEBUG [RS:2;jenkins-hbase4:33393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:30,187 INFO [RS:2;jenkins-hbase4:33393] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:30,187 INFO [RS:2;jenkins-hbase4:33393] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 
2023-07-16 23:15:30,187 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:30,195 INFO [RS:2;jenkins-hbase4:33393] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33393 2023-07-16 23:15:30,204 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33393,1689549327156 2023-07-16 23:15:30,204 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:30,205 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33393,1689549327156] 2023-07-16 23:15:30,206 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33393,1689549327156; numProcessing=3 2023-07-16 23:15:30,207 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33393,1689549327156 already deleted, retry=false 2023-07-16 23:15:30,207 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33393,1689549327156 expired; onlineServers=0 2023-07-16 23:15:30,207 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34891,1689549326627' ***** 2023-07-16 23:15:30,207 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 23:15:30,208 DEBUG [M:0;jenkins-hbase4:34891] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a21d474, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:30,208 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:30,210 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:30,210 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:30,210 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:30,211 INFO [M:0;jenkins-hbase4:34891] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4fa3572f{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 23:15:30,211 INFO [M:0;jenkins-hbase4:34891] server.AbstractConnector(383): Stopped ServerConnector@21b7a099{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:30,211 INFO [M:0;jenkins-hbase4:34891] session.HouseKeeper(149): 
node0 Stopped scavenging 2023-07-16 23:15:30,212 INFO [M:0;jenkins-hbase4:34891] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e549292{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:30,212 INFO [M:0;jenkins-hbase4:34891] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@487288d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:30,212 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34891,1689549326627 2023-07-16 23:15:30,212 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34891,1689549326627; all regions closed. 2023-07-16 23:15:30,212 DEBUG [M:0;jenkins-hbase4:34891] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:30,212 INFO [M:0;jenkins-hbase4:34891] master.HMaster(1491): Stopping master jetty server 2023-07-16 23:15:30,215 INFO [M:0;jenkins-hbase4:34891] server.AbstractConnector(383): Stopped ServerConnector@19e622dc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:30,215 DEBUG [M:0;jenkins-hbase4:34891] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 23:15:30,215 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 23:15:30,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549327575] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549327575,5,FailOnTimeoutGroup] 2023-07-16 23:15:30,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549327566] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549327566,5,FailOnTimeoutGroup] 2023-07-16 23:15:30,215 DEBUG [M:0;jenkins-hbase4:34891] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 23:15:30,216 INFO [M:0;jenkins-hbase4:34891] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 23:15:30,216 INFO [M:0;jenkins-hbase4:34891] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 23:15:30,216 INFO [M:0;jenkins-hbase4:34891] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:30,216 DEBUG [M:0;jenkins-hbase4:34891] master.HMaster(1512): Stopping service threads 2023-07-16 23:15:30,216 INFO [M:0;jenkins-hbase4:34891] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 23:15:30,217 ERROR [M:0;jenkins-hbase4:34891] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 23:15:30,217 INFO [M:0;jenkins-hbase4:34891] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 23:15:30,217 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-16 23:15:30,218 DEBUG [M:0;jenkins-hbase4:34891] zookeeper.ZKUtil(398): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 23:15:30,218 WARN [M:0;jenkins-hbase4:34891] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 23:15:30,218 INFO [M:0;jenkins-hbase4:34891] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 23:15:30,218 INFO [M:0;jenkins-hbase4:34891] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 23:15:30,218 DEBUG [M:0;jenkins-hbase4:34891] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 23:15:30,219 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:30,219 DEBUG [M:0;jenkins-hbase4:34891] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:30,219 DEBUG [M:0;jenkins-hbase4:34891] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 23:15:30,219 DEBUG [M:0;jenkins-hbase4:34891] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:30,219 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-16 23:15:30,233 INFO [M:0;jenkins-hbase4:34891] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7e2fc9302e2a4b4aa93b2bbac5370950 2023-07-16 23:15:30,239 DEBUG [M:0;jenkins-hbase4:34891] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7e2fc9302e2a4b4aa93b2bbac5370950 as hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7e2fc9302e2a4b4aa93b2bbac5370950 2023-07-16 23:15:30,245 INFO [M:0;jenkins-hbase4:34891] regionserver.HStore(1080): Added hdfs://localhost:37199/user/jenkins/test-data/9fb91706-a7e0-4d17-9a6e-8e216e88dcce/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7e2fc9302e2a4b4aa93b2bbac5370950, entries=24, sequenceid=194, filesize=12.4 K 2023-07-16 23:15:30,246 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95182, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=194, compaction requested=false 2023-07-16 23:15:30,248 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 23:15:30,248 DEBUG [M:0;jenkins-hbase4:34891] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:30,252 INFO [M:0;jenkins-hbase4:34891] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 23:15:30,252 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:30,253 INFO [M:0;jenkins-hbase4:34891] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34891 2023-07-16 23:15:30,255 DEBUG [M:0;jenkins-hbase4:34891] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34891,1689549326627 already deleted, retry=false 2023-07-16 23:15:30,541 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,541 INFO [M:0;jenkins-hbase4:34891] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34891,1689549326627; zookeeper connection closed. 2023-07-16 23:15:30,541 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): master:34891-0x101706b4c080000, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,642 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,642 INFO [RS:2;jenkins-hbase4:33393] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33393,1689549327156; zookeeper connection closed. 2023-07-16 23:15:30,642 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:33393-0x101706b4c080003, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,644 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@570e82da] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@570e82da 2023-07-16 23:15:30,742 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,742 INFO [RS:0;jenkins-hbase4:36383] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36383,1689549326802; zookeeper connection closed. 2023-07-16 23:15:30,742 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:36383-0x101706b4c080001, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,743 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d0930da] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d0930da 2023-07-16 23:15:30,842 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,842 INFO [RS:1;jenkins-hbase4:35699] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35699,1689549326997; zookeeper connection closed. 
2023-07-16 23:15:30,842 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): regionserver:35699-0x101706b4c080002, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:30,843 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@755b6b84] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@755b6b84 2023-07-16 23:15:30,843 INFO [Listener at localhost/41101] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-16 23:15:30,843 WARN [Listener at localhost/41101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:30,847 INFO [Listener at localhost/41101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:30,953 WARN [BP-524721687-172.31.14.131-1689549325522 heartbeating to localhost/127.0.0.1:37199] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:30,953 WARN [BP-524721687-172.31.14.131-1689549325522 heartbeating to localhost/127.0.0.1:37199] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-524721687-172.31.14.131-1689549325522 (Datanode Uuid 30b8bad1-1b72-4fc5-9f4c-e62a1aed4e17) service to localhost/127.0.0.1:37199 2023-07-16 23:15:30,954 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/dfs/data/data5/current/BP-524721687-172.31.14.131-1689549325522] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:30,954 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/dfs/data/data6/current/BP-524721687-172.31.14.131-1689549325522] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:30,957 WARN [Listener at localhost/41101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:30,960 INFO [Listener at localhost/41101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:31,066 WARN [BP-524721687-172.31.14.131-1689549325522 heartbeating to localhost/127.0.0.1:37199] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:31,067 WARN [BP-524721687-172.31.14.131-1689549325522 heartbeating to localhost/127.0.0.1:37199] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-524721687-172.31.14.131-1689549325522 (Datanode Uuid a8ba4a90-130d-430d-820e-4642997076b1) service to localhost/127.0.0.1:37199 2023-07-16 23:15:31,067 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/dfs/data/data3/current/BP-524721687-172.31.14.131-1689549325522] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:31,068 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/dfs/data/data4/current/BP-524721687-172.31.14.131-1689549325522] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:31,069 WARN [Listener at localhost/41101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:31,072 INFO [Listener at localhost/41101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:31,175 WARN [BP-524721687-172.31.14.131-1689549325522 heartbeating to localhost/127.0.0.1:37199] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:31,175 WARN [BP-524721687-172.31.14.131-1689549325522 heartbeating to localhost/127.0.0.1:37199] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-524721687-172.31.14.131-1689549325522 (Datanode Uuid 4da24e92-35d9-4942-a3cc-ce4d764e194a) service to localhost/127.0.0.1:37199 2023-07-16 23:15:31,177 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/dfs/data/data1/current/BP-524721687-172.31.14.131-1689549325522] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:31,178 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/cluster_db64f02e-055c-576e-a616-7b290e554e26/dfs/data/data2/current/BP-524721687-172.31.14.131-1689549325522] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:31,189 INFO [Listener at localhost/41101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:31,304 INFO [Listener at localhost/41101] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 23:15:31,340 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 23:15:31,340 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 23:15:31,340 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.log.dir so I do NOT create it in target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87 2023-07-16 23:15:31,340 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e67edb43-459e-2a51-0dfd-51f61a1f8031/hadoop.tmp.dir so I do NOT create it in target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87 2023-07-16 23:15:31,340 INFO [Listener at localhost/41101] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288, deleteOnExit=true 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/test.cache.data in system properties and HBase conf 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir in system properties and HBase conf 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 23:15:31,341 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 23:15:31,341 DEBUG [Listener at localhost/41101] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/nfs.dump.dir in system properties and HBase conf 2023-07-16 23:15:31,342 INFO [Listener at localhost/41101] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir in system properties and HBase conf 2023-07-16 23:15:31,343 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 23:15:31,343 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 23:15:31,343 INFO [Listener at localhost/41101] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 23:15:31,347 WARN [Listener at localhost/41101] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 23:15:31,347 WARN [Listener at localhost/41101] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 23:15:31,390 WARN [Listener at localhost/41101] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:31,393 INFO [Listener at localhost/41101] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:31,398 INFO [Listener at localhost/41101] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/Jetty_localhost_39623_hdfs____7861kr/webapp 2023-07-16 23:15:31,403 DEBUG [Listener at localhost/41101-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101706b4c08000a, quorum=127.0.0.1:58149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 23:15:31,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101706b4c08000a, quorum=127.0.0.1:58149, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 23:15:31,491 INFO [Listener at localhost/41101] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39623 2023-07-16 23:15:31,495 WARN [Listener at localhost/41101] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 23:15:31,496 WARN [Listener at localhost/41101] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 23:15:31,543 WARN [Listener at localhost/43549] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:31,560 WARN [Listener at localhost/43549] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:15:31,562 WARN [Listener 
at localhost/43549] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:31,563 INFO [Listener at localhost/43549] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:31,567 INFO [Listener at localhost/43549] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/Jetty_localhost_45613_datanode____qkewbi/webapp 2023-07-16 23:15:31,678 INFO [Listener at localhost/43549] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45613 2023-07-16 23:15:31,687 WARN [Listener at localhost/42591] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:31,708 WARN [Listener at localhost/42591] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:15:31,710 WARN [Listener at localhost/42591] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:31,711 INFO [Listener at localhost/42591] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:31,714 INFO [Listener at localhost/42591] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/Jetty_localhost_38693_datanode____.h01qha/webapp 2023-07-16 23:15:31,792 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef85ef64aee38619: Processing first storage report for DS-973717e4-5fc5-4800-b515-00829bd200b6 from datanode 4969de56-b7d8-455c-8b67-b95c8e88d30a 2023-07-16 23:15:31,793 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef85ef64aee38619: from storage DS-973717e4-5fc5-4800-b515-00829bd200b6 node DatanodeRegistration(127.0.0.1:39973, datanodeUuid=4969de56-b7d8-455c-8b67-b95c8e88d30a, infoPort=43667, infoSecurePort=0, ipcPort=42591, storageInfo=lv=-57;cid=testClusterID;nsid=1120617444;c=1689549331350), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:31,793 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef85ef64aee38619: Processing first storage report for DS-4afb8d63-4161-4f64-8f39-ea2a4c0fa71e from datanode 4969de56-b7d8-455c-8b67-b95c8e88d30a 2023-07-16 23:15:31,793 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef85ef64aee38619: from storage DS-4afb8d63-4161-4f64-8f39-ea2a4c0fa71e node DatanodeRegistration(127.0.0.1:39973, datanodeUuid=4969de56-b7d8-455c-8b67-b95c8e88d30a, infoPort=43667, infoSecurePort=0, ipcPort=42591, storageInfo=lv=-57;cid=testClusterID;nsid=1120617444;c=1689549331350), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:31,815 INFO [Listener at localhost/42591] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38693 2023-07-16 23:15:31,822 WARN [Listener at localhost/42007] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-16 23:15:31,845 WARN [Listener at localhost/42007] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 23:15:31,849 WARN [Listener at localhost/42007] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 23:15:31,851 INFO [Listener at localhost/42007] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 23:15:31,855 INFO [Listener at localhost/42007] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/Jetty_localhost_45961_datanode____.1rn1mu/webapp 2023-07-16 23:15:31,930 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb0e8adc942dd4777: Processing first storage report for DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70 from datanode 1d13b32b-9886-4cd6-b4cb-3e7d329f625a 2023-07-16 23:15:31,930 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb0e8adc942dd4777: from storage DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70 node DatanodeRegistration(127.0.0.1:35851, datanodeUuid=1d13b32b-9886-4cd6-b4cb-3e7d329f625a, infoPort=37003, infoSecurePort=0, ipcPort=42007, storageInfo=lv=-57;cid=testClusterID;nsid=1120617444;c=1689549331350), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 23:15:31,931 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb0e8adc942dd4777: Processing first storage report for DS-12379726-20ec-4810-9206-12f10bca421d from datanode 1d13b32b-9886-4cd6-b4cb-3e7d329f625a 2023-07-16 23:15:31,931 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb0e8adc942dd4777: from storage DS-12379726-20ec-4810-9206-12f10bca421d node DatanodeRegistration(127.0.0.1:35851, datanodeUuid=1d13b32b-9886-4cd6-b4cb-3e7d329f625a, infoPort=37003, infoSecurePort=0, ipcPort=42007, storageInfo=lv=-57;cid=testClusterID;nsid=1120617444;c=1689549331350), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:31,953 INFO [Listener at localhost/42007] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45961 2023-07-16 23:15:31,962 WARN [Listener at localhost/45635] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 23:15:32,070 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe58a7478f5cd5553: Processing first storage report for DS-ff08177b-5d92-4d13-8401-e64693c8a26c from datanode b391c26e-f0ff-4650-88eb-d68c4700f58c 2023-07-16 23:15:32,070 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe58a7478f5cd5553: from storage DS-ff08177b-5d92-4d13-8401-e64693c8a26c node DatanodeRegistration(127.0.0.1:38277, datanodeUuid=b391c26e-f0ff-4650-88eb-d68c4700f58c, infoPort=32967, infoSecurePort=0, ipcPort=45635, storageInfo=lv=-57;cid=testClusterID;nsid=1120617444;c=1689549331350), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 23:15:32,071 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe58a7478f5cd5553: Processing first storage 
report for DS-30080b35-ee81-43d6-a01f-a21ccd5398f5 from datanode b391c26e-f0ff-4650-88eb-d68c4700f58c 2023-07-16 23:15:32,071 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe58a7478f5cd5553: from storage DS-30080b35-ee81-43d6-a01f-a21ccd5398f5 node DatanodeRegistration(127.0.0.1:38277, datanodeUuid=b391c26e-f0ff-4650-88eb-d68c4700f58c, infoPort=32967, infoSecurePort=0, ipcPort=45635, storageInfo=lv=-57;cid=testClusterID;nsid=1120617444;c=1689549331350), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 23:15:32,075 DEBUG [Listener at localhost/45635] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87 2023-07-16 23:15:32,077 INFO [Listener at localhost/45635] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/zookeeper_0, clientPort=51389, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 23:15:32,078 INFO [Listener at localhost/45635] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51389 2023-07-16 23:15:32,079 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,079 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,094 INFO [Listener at localhost/45635] util.FSUtils(471): Created version file at hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2 with version=8 2023-07-16 23:15:32,094 INFO [Listener at localhost/45635] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:34675/user/jenkins/test-data/c512a1f0-709c-d1d3-3eab-b51ffcff6002/hbase-staging 2023-07-16 23:15:32,095 DEBUG [Listener at localhost/45635] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 23:15:32,095 DEBUG [Listener at localhost/45635] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 23:15:32,095 DEBUG [Listener at localhost/45635] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 23:15:32,095 DEBUG [Listener at localhost/45635] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-16 23:15:32,096 INFO [Listener at localhost/45635] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:32,096 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,097 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,097 INFO [Listener at localhost/45635] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:32,097 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,097 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:32,097 INFO [Listener at localhost/45635] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:32,099 INFO [Listener at localhost/45635] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45129 2023-07-16 23:15:32,100 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,101 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,102 INFO [Listener at localhost/45635] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45129 connecting to ZooKeeper ensemble=127.0.0.1:51389 2023-07-16 23:15:32,109 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:451290x0, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:32,110 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45129-0x101706b61700000 connected 2023-07-16 23:15:32,133 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:32,133 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:32,133 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:32,134 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45129 2023-07-16 23:15:32,135 DEBUG [Listener at localhost/45635] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45129 2023-07-16 23:15:32,135 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45129 2023-07-16 23:15:32,136 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45129 2023-07-16 23:15:32,137 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45129 2023-07-16 23:15:32,138 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:32,138 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:32,139 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:32,139 INFO [Listener at localhost/45635] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 23:15:32,139 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:32,139 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:32,139 INFO [Listener at localhost/45635] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 23:15:32,140 INFO [Listener at localhost/45635] http.HttpServer(1146): Jetty bound to port 34471 2023-07-16 23:15:32,140 INFO [Listener at localhost/45635] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:32,143 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,143 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@157e3680{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:32,143 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,143 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32bc0174{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:32,258 INFO [Listener at localhost/45635] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:32,259 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:32,259 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:32,259 INFO [Listener at localhost/45635] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:15:32,260 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,262 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1e20b2e1{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/jetty-0_0_0_0-34471-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5516073654171028804/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 23:15:32,263 INFO [Listener at localhost/45635] server.AbstractConnector(333): Started ServerConnector@5fb63076{HTTP/1.1, (http/1.1)}{0.0.0.0:34471} 2023-07-16 23:15:32,263 INFO [Listener at localhost/45635] server.Server(415): Started @43887ms 2023-07-16 23:15:32,263 INFO [Listener at localhost/45635] master.HMaster(444): hbase.rootdir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2, hbase.cluster.distributed=false 2023-07-16 23:15:32,277 INFO [Listener at localhost/45635] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:32,277 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,277 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,277 
INFO [Listener at localhost/45635] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:32,277 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,277 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:32,277 INFO [Listener at localhost/45635] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:32,278 INFO [Listener at localhost/45635] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39573 2023-07-16 23:15:32,278 INFO [Listener at localhost/45635] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:32,279 DEBUG [Listener at localhost/45635] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:32,280 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,281 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,283 INFO [Listener at localhost/45635] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39573 connecting to ZooKeeper ensemble=127.0.0.1:51389 2023-07-16 23:15:32,286 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:395730x0, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:32,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39573-0x101706b61700001 connected 2023-07-16 23:15:32,288 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:32,288 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:32,289 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:32,289 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39573 2023-07-16 23:15:32,289 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39573 2023-07-16 23:15:32,290 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39573 2023-07-16 23:15:32,290 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39573 2023-07-16 23:15:32,290 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39573 2023-07-16 23:15:32,292 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:32,292 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:32,292 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:32,293 INFO [Listener at localhost/45635] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:32,293 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:32,293 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:32,293 INFO [Listener at localhost/45635] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 23:15:32,294 INFO [Listener at localhost/45635] http.HttpServer(1146): Jetty bound to port 38319 2023-07-16 23:15:32,294 INFO [Listener at localhost/45635] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:32,297 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,297 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@614b4af3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:32,298 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,298 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68a51037{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:32,410 INFO [Listener at localhost/45635] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:32,410 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:32,411 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:32,411 INFO [Listener at localhost/45635] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:15:32,412 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,412 INFO 
[Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4b3db42d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/jetty-0_0_0_0-38319-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7133610833841848674/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:32,414 INFO [Listener at localhost/45635] server.AbstractConnector(333): Started ServerConnector@46225b1{HTTP/1.1, (http/1.1)}{0.0.0.0:38319} 2023-07-16 23:15:32,414 INFO [Listener at localhost/45635] server.Server(415): Started @44038ms 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:32,425 INFO [Listener at localhost/45635] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:32,426 INFO [Listener at localhost/45635] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37649 2023-07-16 23:15:32,426 INFO [Listener at localhost/45635] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:32,428 DEBUG [Listener at localhost/45635] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:32,428 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,429 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,430 INFO [Listener at localhost/45635] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37649 connecting to ZooKeeper ensemble=127.0.0.1:51389 2023-07-16 23:15:32,434 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:376490x0, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 
23:15:32,436 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37649-0x101706b61700002 connected 2023-07-16 23:15:32,436 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:32,437 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:32,437 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:32,437 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37649 2023-07-16 23:15:32,438 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37649 2023-07-16 23:15:32,438 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37649 2023-07-16 23:15:32,438 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37649 2023-07-16 23:15:32,438 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37649 2023-07-16 23:15:32,440 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:32,440 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:32,440 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:32,440 INFO [Listener at localhost/45635] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:32,441 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:32,441 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:32,441 INFO [Listener at localhost/45635] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
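The regionserver startup entries above show RecoverableZooKeeper connecting to the ensemble and ZKUtil setting watchers on znodes that do not yet exist (/hbase/master, /hbase/running, /hbase/acl). The sketch below illustrates the same pattern with the plain Apache ZooKeeper client, not the HBase-internal ZKUtil code: `exists()` registers a watch even when the node is absent, so the client is notified once the active master later creates it. The ensemble address is an assumption; the test uses an ephemeral port.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Hypothetical ensemble address; session timeout mirrors the 90000ms seen in the log.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // Set a watch on a znode that may not exist yet; a NodeCreated event
        // fires when it appears, which is how a regionserver learns of the master.
        zk.exists("/hbase/master", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                System.out.println("master znode created: " + event.getPath());
            }
        });
        Thread.sleep(1_000); // keep the session alive briefly for the demo
        zk.close();
    }
}
```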
2023-07-16 23:15:32,441 INFO [Listener at localhost/45635] http.HttpServer(1146): Jetty bound to port 41265 2023-07-16 23:15:32,441 INFO [Listener at localhost/45635] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:32,444 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,444 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41703287{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:32,444 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,444 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@57997e99{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:32,556 INFO [Listener at localhost/45635] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:32,557 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:32,557 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:32,557 INFO [Listener at localhost/45635] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 23:15:32,558 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,558 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@133cc24f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/jetty-0_0_0_0-41265-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3505068895007629233/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:32,561 INFO [Listener at localhost/45635] server.AbstractConnector(333): Started ServerConnector@30f143f0{HTTP/1.1, (http/1.1)}{0.0.0.0:41265} 2023-07-16 23:15:32,561 INFO [Listener at localhost/45635] server.Server(415): Started @44185ms 2023-07-16 23:15:32,578 INFO [Listener at localhost/45635] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:32,578 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,578 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,578 INFO [Listener at localhost/45635] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:32,578 INFO 
[Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:32,578 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:32,578 INFO [Listener at localhost/45635] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:32,579 INFO [Listener at localhost/45635] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33109 2023-07-16 23:15:32,580 INFO [Listener at localhost/45635] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:32,581 DEBUG [Listener at localhost/45635] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:32,582 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,583 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,584 INFO [Listener at localhost/45635] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33109 connecting to ZooKeeper ensemble=127.0.0.1:51389 2023-07-16 23:15:32,588 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:331090x0, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:32,589 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:331090x0, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:32,589 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33109-0x101706b61700003 connected 2023-07-16 23:15:32,590 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:32,590 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:32,590 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33109 2023-07-16 23:15:32,591 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33109 2023-07-16 23:15:32,591 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33109 2023-07-16 23:15:32,591 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33109 2023-07-16 23:15:32,592 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=33109 2023-07-16 23:15:32,593 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:32,593 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:32,593 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:32,594 INFO [Listener at localhost/45635] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:32,594 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:32,594 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:32,594 INFO [Listener at localhost/45635] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 23:15:32,595 INFO [Listener at localhost/45635] http.HttpServer(1146): Jetty bound to port 45039 2023-07-16 23:15:32,595 INFO [Listener at localhost/45635] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:32,596 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,596 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24cd1638{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:32,596 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,596 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@399f6210{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:32,711 INFO [Listener at localhost/45635] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:32,712 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:32,712 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:32,712 INFO [Listener at localhost/45635] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 23:15:32,713 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:32,714 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@77eff808{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/jetty-0_0_0_0-45039-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1902150603769385945/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:32,715 INFO [Listener at localhost/45635] server.AbstractConnector(333): Started ServerConnector@69c9dac6{HTTP/1.1, (http/1.1)}{0.0.0.0:45039} 2023-07-16 23:15:32,715 INFO [Listener at localhost/45635] server.Server(415): Started @44339ms 2023-07-16 23:15:32,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:32,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4864ad99{HTTP/1.1, (http/1.1)}{0.0.0.0:46855} 2023-07-16 23:15:32,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44344ms 2023-07-16 23:15:32,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,722 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 23:15:32,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,723 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:32,723 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:32,723 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:32,723 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:32,725 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:32,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:15:32,727 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:15:32,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45129,1689549332096 from backup master directory 2023-07-16 23:15:32,730 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,730 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 23:15:32,730 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:32,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,743 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/hbase.id with ID: 754cdeef-e017-491b-8b71-5f8b38598b77 2023-07-16 23:15:32,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:32,756 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:32,769 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x33538665 to 127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:32,773 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f33810d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:32,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:32,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 23:15:32,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:32,776 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store-tmp 2023-07-16 23:15:32,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:32,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 23:15:32,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:32,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:32,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 23:15:32,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:32,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
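The two entries above show the master bootstrapping its local 'master:store' region from a descriptor with a single 'proc' family (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', and so on), then immediately closing it under the store-tmp directory. As an illustration only, and not the internal MasterRegion code, an equivalent descriptor can be built with the public HBase 2.x client API roughly as follows.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
    public static void main(String[] args) {
        // Mirrors the attributes printed for the 'proc' family in the log above.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.ROW)
            .setBlocksize(65536)
            .setInMemory(false)
            .build();
        TableDescriptor store = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();
        System.out.println(store);
    }
}
```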
2023-07-16 23:15:32,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:32,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/WALs/jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,787 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45129%2C1689549332096, suffix=, logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/WALs/jenkins-hbase4.apache.org,45129,1689549332096, archiveDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/oldWALs, maxLogs=10 2023-07-16 23:15:32,806 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK] 2023-07-16 23:15:32,808 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK] 2023-07-16 23:15:32,808 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK] 2023-07-16 23:15:32,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/WALs/jenkins-hbase4.apache.org,45129,1689549332096/jenkins-hbase4.apache.org%2C45129%2C1689549332096.1689549332787 2023-07-16 23:15:32,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK], DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK]] 2023-07-16 23:15:32,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:32,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:32,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:32,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:32,814 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:32,816 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 23:15:32,816 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 23:15:32,816 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:32,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:32,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:32,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 23:15:32,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:32,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10120423680, jitterRate=-0.057462096214294434}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:32,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:32,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 23:15:32,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 23:15:32,827 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 23:15:32,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 23:15:32,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 23:15:32,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 23:15:32,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 23:15:32,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 23:15:32,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 23:15:32,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 23:15:32,830 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 23:15:32,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 23:15:32,832 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:32,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 23:15:32,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 23:15:32,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 23:15:32,836 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:32,836 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:32,836 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-16 23:15:32,836 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:32,836 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:32,836 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45129,1689549332096, sessionid=0x101706b61700000, setting cluster-up flag (Was=false) 2023-07-16 23:15:32,841 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:32,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 23:15:32,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,855 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:32,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 23:15:32,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:32,867 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.hbase-snapshot/.tmp 2023-07-16 23:15:32,868 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 23:15:32,868 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 23:15:32,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 23:15:32,869 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:15:32,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
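The entries just above show the RSGroupAdminEndpoint master coprocessor being registered and RSGroupInfoManagerImpl refreshing in offline mode, which is the feature this rsgroup test exercises. For context, the configuration normally used to enable region server groups on a 2.x cluster is sketched below with the keys documented in the HBase reference guide; in this run the test harness sets them programmatically, which the log itself does not show.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Register the rsgroup admin endpoint on the master and swap in the
        // group-aware balancer, per the HBase book for branch-2 rsgroup support.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        System.out.println(conf.get("hbase.coprocessor.master.classes"));
    }
}
```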
2023-07-16 23:15:32,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 23:15:32,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 23:15:32,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 23:15:32,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 23:15:32,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 23:15:32,880 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:32,880 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:32,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:32,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 23:15:32,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 23:15:32,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:32,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:32,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:32,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689549362882 2023-07-16 23:15:32,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:32,883 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 23:15:32,883 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 23:15:32,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 23:15:32,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 23:15:32,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 23:15:32,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 23:15:32,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 23:15:32,884 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549332884,5,FailOnTimeoutGroup] 2023-07-16 23:15:32,885 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
{NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:32,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549332884,5,FailOnTimeoutGroup] 2023-07-16 23:15:32,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:32,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 23:15:32,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:32,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:32,897 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:32,898 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:32,898 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2 2023-07-16 23:15:32,911 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:32,912 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for 
column family info of region 1588230740 2023-07-16 23:15:32,914 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/info 2023-07-16 23:15:32,914 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:15:32,915 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:32,915 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:15:32,920 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:32,920 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(951): ClusterId : 754cdeef-e017-491b-8b71-5f8b38598b77 2023-07-16 23:15:32,920 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(951): ClusterId : 754cdeef-e017-491b-8b71-5f8b38598b77 2023-07-16 23:15:32,922 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:32,923 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:32,921 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(951): ClusterId : 754cdeef-e017-491b-8b71-5f8b38598b77 2023-07-16 23:15:32,920 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:15:32,923 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:32,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:32,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:15:32,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/table 2023-07-16 23:15:32,926 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:15:32,926 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:32,927 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:32,927 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:32,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:32,927 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:32,927 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:32,927 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:32,927 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740 2023-07-16 23:15:32,928 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740 2023-07-16 23:15:32,929 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:32,930 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:32,931 DEBUG [RS:2;jenkins-hbase4:33109] zookeeper.ReadOnlyZKClient(139): Connect 0x1ce9d870 to 127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:32,931 DEBUG [RS:1;jenkins-hbase4:37649] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:32,932 DEBUG [RS:0;jenkins-hbase4:39573] zookeeper.ReadOnlyZKClient(139): Connect 0x5b80f6a2 to 127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:32,933 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 23:15:32,933 DEBUG [RS:1;jenkins-hbase4:37649] zookeeper.ReadOnlyZKClient(139): Connect 0x26d2d4a2 to 127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:32,936 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:15:32,941 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:32,942 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11448464000, jitterRate=0.06622129678726196}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:15:32,942 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:15:32,942 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:15:32,942 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:15:32,942 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:15:32,942 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:15:32,942 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:15:32,943 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:32,943 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 23:15:32,944 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 23:15:32,944 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 23:15:32,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 23:15:32,945 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 23:15:32,946 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 23:15:32,950 DEBUG [RS:0;jenkins-hbase4:39573] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f3da484, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:32,950 DEBUG [RS:0;jenkins-hbase4:39573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a11cace, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:32,951 DEBUG [RS:1;jenkins-hbase4:37649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28152e08, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:32,951 DEBUG [RS:1;jenkins-hbase4:37649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c2abfd2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:32,953 DEBUG [RS:2;jenkins-hbase4:33109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@122e259d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:32,953 DEBUG [RS:2;jenkins-hbase4:33109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1611b300, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:32,960 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39573 2023-07-16 23:15:32,961 INFO [RS:0;jenkins-hbase4:39573] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:32,961 INFO [RS:0;jenkins-hbase4:39573] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:32,961 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 23:15:32,961 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45129,1689549332096 with isa=jenkins-hbase4.apache.org/172.31.14.131:39573, startcode=1689549332276 2023-07-16 23:15:32,964 DEBUG [RS:0;jenkins-hbase4:39573] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:32,971 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47537, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:32,971 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37649 2023-07-16 23:15:32,971 INFO [RS:1;jenkins-hbase4:37649] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:32,971 INFO [RS:1;jenkins-hbase4:37649] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:32,972 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:33109 2023-07-16 23:15:32,973 INFO [RS:2;jenkins-hbase4:33109] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:32,973 INFO [RS:2;jenkins-hbase4:33109] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:32,973 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:15:32,972 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45129] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:32,972 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:15:32,973 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 23:15:32,974 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 23:15:32,974 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2 2023-07-16 23:15:32,974 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45129,1689549332096 with isa=jenkins-hbase4.apache.org/172.31.14.131:33109, startcode=1689549332577 2023-07-16 23:15:32,974 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43549 2023-07-16 23:15:32,975 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34471 2023-07-16 23:15:32,975 DEBUG [RS:2;jenkins-hbase4:33109] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:32,975 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45129,1689549332096 with isa=jenkins-hbase4.apache.org/172.31.14.131:37649, startcode=1689549332425 2023-07-16 23:15:32,975 DEBUG [RS:1;jenkins-hbase4:37649] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:32,976 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:32,977 DEBUG [RS:0;jenkins-hbase4:39573] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:32,977 WARN [RS:0;jenkins-hbase4:39573] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:32,977 INFO [RS:0;jenkins-hbase4:39573] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:32,977 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:32,979 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58197, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:32,979 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54161, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:32,979 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45129] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:32,980 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 23:15:32,980 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 23:15:32,980 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45129] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:32,980 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 23:15:32,980 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 23:15:32,980 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2 2023-07-16 23:15:32,980 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43549 2023-07-16 23:15:32,980 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34471 2023-07-16 23:15:32,982 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2 2023-07-16 23:15:32,982 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43549 2023-07-16 23:15:32,983 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34471 2023-07-16 23:15:32,988 DEBUG [RS:1;jenkins-hbase4:37649] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:32,989 WARN [RS:1;jenkins-hbase4:37649] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 23:15:32,989 INFO [RS:1;jenkins-hbase4:37649] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:32,989 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:32,989 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33109,1689549332577] 2023-07-16 23:15:32,989 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37649,1689549332425] 2023-07-16 23:15:32,989 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39573,1689549332276] 2023-07-16 23:15:32,989 DEBUG [RS:2;jenkins-hbase4:33109] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:32,989 WARN [RS:2;jenkins-hbase4:33109] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 23:15:32,990 INFO [RS:2;jenkins-hbase4:33109] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:32,992 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:32,995 DEBUG [RS:0;jenkins-hbase4:39573] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:32,996 DEBUG [RS:0;jenkins-hbase4:39573] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:32,996 DEBUG [RS:1;jenkins-hbase4:37649] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:32,996 DEBUG [RS:0;jenkins-hbase4:39573] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:32,996 DEBUG [RS:1;jenkins-hbase4:37649] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:32,997 DEBUG [RS:1;jenkins-hbase4:37649] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:32,997 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:32,997 INFO [RS:0;jenkins-hbase4:39573] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 
23:15:32,999 DEBUG [RS:1;jenkins-hbase4:37649] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:32,999 INFO [RS:1;jenkins-hbase4:37649] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:33,005 INFO [RS:0;jenkins-hbase4:39573] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:33,005 INFO [RS:1;jenkins-hbase4:37649] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:33,013 INFO [RS:0;jenkins-hbase4:39573] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:33,013 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,019 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:33,019 INFO [RS:1;jenkins-hbase4:37649] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:33,019 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,020 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:33,023 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,023 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,023 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,023 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,023 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,023 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:33,023 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:33,025 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:0;jenkins-hbase4:39573] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,025 DEBUG [RS:2;jenkins-hbase4:33109] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,025 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,026 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,026 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:33,026 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,026 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,026 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,026 DEBUG [RS:1;jenkins-hbase4:37649] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,026 DEBUG [RS:2;jenkins-hbase4:33109] zookeeper.ZKUtil(162): 
regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:33,026 DEBUG [RS:2;jenkins-hbase4:33109] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:33,030 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,030 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,030 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,030 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:33,031 INFO [RS:2;jenkins-hbase4:33109] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:33,034 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,035 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,035 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,035 INFO [RS:2;jenkins-hbase4:33109] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:33,036 INFO [RS:2;jenkins-hbase4:33109] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:33,036 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,036 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:33,037 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:33,037 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,037 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,038 DEBUG [RS:2;jenkins-hbase4:33109] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:33,047 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,047 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,047 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,047 INFO [RS:1;jenkins-hbase4:37649] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:33,047 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37649,1689549332425-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,049 INFO [RS:0;jenkins-hbase4:39573] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:33,049 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39573,1689549332276-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,059 INFO [RS:2;jenkins-hbase4:33109] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:33,059 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33109,1689549332577-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:33,063 INFO [RS:1;jenkins-hbase4:37649] regionserver.Replication(203): jenkins-hbase4.apache.org,37649,1689549332425 started 2023-07-16 23:15:33,063 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37649,1689549332425, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37649, sessionid=0x101706b61700002 2023-07-16 23:15:33,063 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:33,064 DEBUG [RS:1;jenkins-hbase4:37649] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:33,064 DEBUG [RS:1;jenkins-hbase4:37649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37649,1689549332425' 2023-07-16 23:15:33,064 DEBUG [RS:1;jenkins-hbase4:37649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:33,064 DEBUG [RS:1;jenkins-hbase4:37649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:33,065 INFO [RS:0;jenkins-hbase4:39573] regionserver.Replication(203): jenkins-hbase4.apache.org,39573,1689549332276 started 2023-07-16 23:15:33,065 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:33,065 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39573,1689549332276, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39573, sessionid=0x101706b61700001 2023-07-16 23:15:33,065 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:33,065 DEBUG [RS:1;jenkins-hbase4:37649] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:33,065 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:33,065 DEBUG [RS:1;jenkins-hbase4:37649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37649,1689549332425' 2023-07-16 23:15:33,065 DEBUG [RS:1;jenkins-hbase4:37649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:33,065 DEBUG [RS:0;jenkins-hbase4:39573] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:33,065 DEBUG [RS:0;jenkins-hbase4:39573] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39573,1689549332276' 2023-07-16 23:15:33,065 DEBUG [RS:0;jenkins-hbase4:39573] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:33,065 DEBUG [RS:1;jenkins-hbase4:37649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:33,066 DEBUG [RS:1;jenkins-hbase4:37649] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot 
started 2023-07-16 23:15:33,066 INFO [RS:1;jenkins-hbase4:37649] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:15:33,066 INFO [RS:1;jenkins-hbase4:37649] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39573,1689549332276' 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:33,066 DEBUG [RS:0;jenkins-hbase4:39573] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:33,067 DEBUG [RS:0;jenkins-hbase4:39573] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:33,067 INFO [RS:0;jenkins-hbase4:39573] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:15:33,067 INFO [RS:0;jenkins-hbase4:39573] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 23:15:33,071 INFO [RS:2;jenkins-hbase4:33109] regionserver.Replication(203): jenkins-hbase4.apache.org,33109,1689549332577 started 2023-07-16 23:15:33,071 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33109,1689549332577, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33109, sessionid=0x101706b61700003 2023-07-16 23:15:33,071 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:33,071 DEBUG [RS:2;jenkins-hbase4:33109] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,071 DEBUG [RS:2;jenkins-hbase4:33109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33109,1689549332577' 2023-07-16 23:15:33,071 DEBUG [RS:2;jenkins-hbase4:33109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:33,072 DEBUG [RS:2;jenkins-hbase4:33109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:33,072 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:33,072 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:33,072 DEBUG [RS:2;jenkins-hbase4:33109] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,072 DEBUG [RS:2;jenkins-hbase4:33109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33109,1689549332577' 2023-07-16 23:15:33,072 DEBUG 
[RS:2;jenkins-hbase4:33109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:33,072 DEBUG [RS:2;jenkins-hbase4:33109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:33,076 DEBUG [RS:2;jenkins-hbase4:33109] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:33,076 INFO [RS:2;jenkins-hbase4:33109] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:15:33,076 INFO [RS:2;jenkins-hbase4:33109] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 23:15:33,096 DEBUG [jenkins-hbase4:45129] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 23:15:33,097 DEBUG [jenkins-hbase4:45129] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:33,097 DEBUG [jenkins-hbase4:45129] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:33,097 DEBUG [jenkins-hbase4:45129] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:33,097 DEBUG [jenkins-hbase4:45129] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:33,097 DEBUG [jenkins-hbase4:45129] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:33,100 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33109,1689549332577, state=OPENING 2023-07-16 23:15:33,101 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 23:15:33,102 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:33,103 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:33,103 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33109,1689549332577}] 2023-07-16 23:15:33,168 INFO [RS:1;jenkins-hbase4:37649] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37649%2C1689549332425, suffix=, logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,37649,1689549332425, archiveDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs, maxLogs=32 2023-07-16 23:15:33,168 INFO [RS:0;jenkins-hbase4:39573] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39573%2C1689549332276, suffix=, logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,39573,1689549332276, archiveDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs, maxLogs=32 2023-07-16 23:15:33,174 WARN [ReadOnlyZKClient-127.0.0.1:51389@0x33538665] client.ZKConnectionRegistry(168): Meta region is in state OPENING 
2023-07-16 23:15:33,175 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:33,176 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:33,177 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33109] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:53626 deadline: 1689549393176, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,178 INFO [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33109%2C1689549332577, suffix=, logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,33109,1689549332577, archiveDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs, maxLogs=32 2023-07-16 23:15:33,188 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK] 2023-07-16 23:15:33,189 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK] 2023-07-16 23:15:33,189 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK] 2023-07-16 23:15:33,189 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK] 2023-07-16 23:15:33,190 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK] 2023-07-16 23:15:33,190 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK] 2023-07-16 23:15:33,192 INFO [RS:0;jenkins-hbase4:39573] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,39573,1689549332276/jenkins-hbase4.apache.org%2C39573%2C1689549332276.1689549333169 2023-07-16 23:15:33,195 DEBUG [RS:0;jenkins-hbase4:39573] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK], 
DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK], DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK]] 2023-07-16 23:15:33,195 INFO [RS:1;jenkins-hbase4:37649] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,37649,1689549332425/jenkins-hbase4.apache.org%2C37649%2C1689549332425.1689549333169 2023-07-16 23:15:33,195 DEBUG [RS:1;jenkins-hbase4:37649] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK], DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK]] 2023-07-16 23:15:33,203 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK] 2023-07-16 23:15:33,204 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK] 2023-07-16 23:15:33,204 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK] 2023-07-16 23:15:33,205 INFO [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,33109,1689549332577/jenkins-hbase4.apache.org%2C33109%2C1689549332577.1689549333178 2023-07-16 23:15:33,206 DEBUG [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK], DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK]] 2023-07-16 23:15:33,258 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,259 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:33,261 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53640, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:33,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 23:15:33,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:33,266 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33109%2C1689549332577.meta, suffix=.meta, 
logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,33109,1689549332577, archiveDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs, maxLogs=32 2023-07-16 23:15:33,282 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK] 2023-07-16 23:15:33,282 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK] 2023-07-16 23:15:33,283 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK] 2023-07-16 23:15:33,285 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,33109,1689549332577/jenkins-hbase4.apache.org%2C33109%2C1689549332577.meta.1689549333267.meta 2023-07-16 23:15:33,285 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK], DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK]] 2023-07-16 23:15:33,285 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:33,285 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:15:33,285 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 23:15:33,285 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-16 23:15:33,286 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 23:15:33,286 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:33,286 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 23:15:33,286 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 23:15:33,290 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 23:15:33,291 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/info 2023-07-16 23:15:33,291 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/info 2023-07-16 23:15:33,291 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 23:15:33,292 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:33,292 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 23:15:33,293 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:33,293 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/rep_barrier 2023-07-16 23:15:33,293 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 23:15:33,294 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:33,294 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 23:15:33,295 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/table 2023-07-16 23:15:33,295 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/table 2023-07-16 23:15:33,296 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 23:15:33,296 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:33,297 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740 2023-07-16 23:15:33,298 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740 2023-07-16 23:15:33,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 23:15:33,301 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 23:15:33,302 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11463685600, jitterRate=0.06763891875743866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 23:15:33,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 23:15:33,303 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689549333258 2023-07-16 23:15:33,307 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 23:15:33,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 23:15:33,308 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33109,1689549332577, state=OPEN 2023-07-16 23:15:33,310 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 23:15:33,310 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 23:15:33,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 23:15:33,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33109,1689549332577 in 207 msec 2023-07-16 23:15:33,313 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 23:15:33,313 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 368 msec 2023-07-16 23:15:33,315 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 445 msec 2023-07-16 23:15:33,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689549333315, completionTime=-1 2023-07-16 23:15:33,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 23:15:33,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-16 23:15:33,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 23:15:33,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689549393320 2023-07-16 23:15:33,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689549453320 2023-07-16 23:15:33,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45129,1689549332096-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45129,1689549332096-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45129,1689549332096-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45129, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 23:15:33,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:33,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 23:15:33,328 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 23:15:33,330 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:33,330 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:33,332 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,332 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1 empty. 2023-07-16 23:15:33,333 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,333 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 23:15:33,347 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:33,348 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 95e5611863563cc6568d4edec65b3ad1, NAME => 'hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp 2023-07-16 23:15:33,356 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:33,356 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 95e5611863563cc6568d4edec65b3ad1, disabling compactions & flushes 2023-07-16 23:15:33,356 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 
2023-07-16 23:15:33,356 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:33,356 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. after waiting 0 ms 2023-07-16 23:15:33,356 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:33,356 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:33,356 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 95e5611863563cc6568d4edec65b3ad1: 2023-07-16 23:15:33,358 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:33,359 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549333359"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549333359"}]},"ts":"1689549333359"} 2023-07-16 23:15:33,362 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:33,362 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:33,363 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549333362"}]},"ts":"1689549333362"} 2023-07-16 23:15:33,364 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 23:15:33,367 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:33,367 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:33,367 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:33,367 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:33,367 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:33,367 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=95e5611863563cc6568d4edec65b3ad1, ASSIGN}] 2023-07-16 23:15:33,369 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=95e5611863563cc6568d4edec65b3ad1, ASSIGN 2023-07-16 23:15:33,370 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=95e5611863563cc6568d4edec65b3ad1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39573,1689549332276; forceNewPlan=false, retain=false 2023-07-16 23:15:33,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:33,483 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 23:15:33,484 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:33,485 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:33,487 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,487 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6 empty. 
2023-07-16 23:15:33,488 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,488 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 23:15:33,499 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:33,500 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 783f0cc50654ddad1c9b50ae8b44cfa6, NAME => 'hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp 2023-07-16 23:15:33,514 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:33,515 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 783f0cc50654ddad1c9b50ae8b44cfa6, disabling compactions & flushes 2023-07-16 23:15:33,515 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:33,515 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:33,515 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. after waiting 0 ms 2023-07-16 23:15:33,515 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:33,515 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 
2023-07-16 23:15:33,515 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 783f0cc50654ddad1c9b50ae8b44cfa6: 2023-07-16 23:15:33,517 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:33,518 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549333518"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549333518"}]},"ts":"1689549333518"} 2023-07-16 23:15:33,520 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 23:15:33,520 INFO [jenkins-hbase4:45129] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 23:15:33,521 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:33,521 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=95e5611863563cc6568d4edec65b3ad1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:33,522 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549333521"}]},"ts":"1689549333521"} 2023-07-16 23:15:33,522 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549333521"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549333521"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549333521"}]},"ts":"1689549333521"} 2023-07-16 23:15:33,523 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 23:15:33,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 95e5611863563cc6568d4edec65b3ad1, server=jenkins-hbase4.apache.org,39573,1689549332276}] 2023-07-16 23:15:33,526 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-16 23:15:33,534 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:33,534 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:33,534 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:33,534 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:33,534 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:33,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=783f0cc50654ddad1c9b50ae8b44cfa6, ASSIGN}] 2023-07-16 23:15:33,535 INFO 
[PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=783f0cc50654ddad1c9b50ae8b44cfa6, ASSIGN 2023-07-16 23:15:33,535 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=783f0cc50654ddad1c9b50ae8b44cfa6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33109,1689549332577; forceNewPlan=false, retain=false 2023-07-16 23:15:33,676 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:33,676 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:33,678 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49080, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:33,682 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:33,682 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 95e5611863563cc6568d4edec65b3ad1, NAME => 'hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:33,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:33,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,684 INFO [StoreOpener-95e5611863563cc6568d4edec65b3ad1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,686 INFO [jenkins-hbase4:45129] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:33,687 DEBUG [StoreOpener-95e5611863563cc6568d4edec65b3ad1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/info 2023-07-16 23:15:33,687 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=783f0cc50654ddad1c9b50ae8b44cfa6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,687 DEBUG [StoreOpener-95e5611863563cc6568d4edec65b3ad1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/info 2023-07-16 23:15:33,687 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549333687"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549333687"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549333687"}]},"ts":"1689549333687"} 2023-07-16 23:15:33,688 INFO [StoreOpener-95e5611863563cc6568d4edec65b3ad1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 95e5611863563cc6568d4edec65b3ad1 columnFamilyName info 2023-07-16 23:15:33,688 INFO [StoreOpener-95e5611863563cc6568d4edec65b3ad1-1] regionserver.HStore(310): Store=95e5611863563cc6568d4edec65b3ad1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:33,689 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,689 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 783f0cc50654ddad1c9b50ae8b44cfa6, server=jenkins-hbase4.apache.org,33109,1689549332577}] 2023-07-16 23:15:33,689 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,692 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:33,694 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:33,695 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 95e5611863563cc6568d4edec65b3ad1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10621106400, jitterRate=-0.01083238422870636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:33,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 95e5611863563cc6568d4edec65b3ad1: 2023-07-16 23:15:33,695 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1., pid=7, masterSystemTime=1689549333676 2023-07-16 23:15:33,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:33,699 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:33,700 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=95e5611863563cc6568d4edec65b3ad1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:33,700 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689549333700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549333700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549333700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549333700"}]},"ts":"1689549333700"} 2023-07-16 23:15:33,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-16 23:15:33,703 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 95e5611863563cc6568d4edec65b3ad1, server=jenkins-hbase4.apache.org,39573,1689549332276 in 178 msec 2023-07-16 23:15:33,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-16 23:15:33,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=95e5611863563cc6568d4edec65b3ad1, ASSIGN in 336 msec 2023-07-16 23:15:33,705 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:33,705 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549333705"}]},"ts":"1689549333705"} 2023-07-16 23:15:33,706 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 23:15:33,709 INFO 
[PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:33,710 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 382 msec 2023-07-16 23:15:33,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 23:15:33,732 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:33,732 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:33,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:33,736 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49086, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:33,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 23:15:33,748 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:33,751 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-16 23:15:33,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 23:15:33,765 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-16 23:15:33,766 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 23:15:33,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 
2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 783f0cc50654ddad1c9b50ae8b44cfa6, NAME => 'hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. service=MultiRowMutationService 2023-07-16 23:15:33,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,847 INFO [StoreOpener-783f0cc50654ddad1c9b50ae8b44cfa6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,848 DEBUG [StoreOpener-783f0cc50654ddad1c9b50ae8b44cfa6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/m 2023-07-16 23:15:33,848 DEBUG [StoreOpener-783f0cc50654ddad1c9b50ae8b44cfa6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/m 2023-07-16 23:15:33,848 INFO [StoreOpener-783f0cc50654ddad1c9b50ae8b44cfa6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
783f0cc50654ddad1c9b50ae8b44cfa6 columnFamilyName m 2023-07-16 23:15:33,849 INFO [StoreOpener-783f0cc50654ddad1c9b50ae8b44cfa6-1] regionserver.HStore(310): Store=783f0cc50654ddad1c9b50ae8b44cfa6/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:33,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,851 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,853 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:33,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:33,855 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 783f0cc50654ddad1c9b50ae8b44cfa6; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@29f4dc00, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:33,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 783f0cc50654ddad1c9b50ae8b44cfa6: 2023-07-16 23:15:33,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6., pid=9, masterSystemTime=1689549333841 2023-07-16 23:15:33,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:33,857 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 
2023-07-16 23:15:33,858 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=783f0cc50654ddad1c9b50ae8b44cfa6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:33,858 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689549333858"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549333858"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549333858"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549333858"}]},"ts":"1689549333858"} 2023-07-16 23:15:33,860 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-16 23:15:33,860 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 783f0cc50654ddad1c9b50ae8b44cfa6, server=jenkins-hbase4.apache.org,33109,1689549332577 in 170 msec 2023-07-16 23:15:33,862 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 23:15:33,862 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=783f0cc50654ddad1c9b50ae8b44cfa6, ASSIGN in 326 msec 2023-07-16 23:15:33,872 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:33,877 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 115 msec 2023-07-16 23:15:33,877 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:33,877 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549333877"}]},"ts":"1689549333877"} 2023-07-16 23:15:33,879 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 23:15:33,886 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 23:15:33,887 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:33,889 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 406 msec 2023-07-16 23:15:33,890 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 23:15:33,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed 
initialization 1.160sec 2023-07-16 23:15:33,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 23:15:33,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 23:15:33,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 23:15:33,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45129,1689549332096-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 23:15:33,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45129,1689549332096-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 23:15:33,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 23:15:33,921 DEBUG [Listener at localhost/45635] zookeeper.ReadOnlyZKClient(139): Connect 0x239401c3 to 127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:33,926 DEBUG [Listener at localhost/45635] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ba0e88a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:33,928 DEBUG [hconnection-0x3387784e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:33,930 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:33,931 INFO [Listener at localhost/45635] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:33,931 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:33,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 23:15:33,987 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-16 23:15:33,991 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:33,991 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:33,993 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:15:33,995 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 23:15:34,034 DEBUG [Listener at localhost/45635] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 23:15:34,036 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35080, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 23:15:34,039 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 23:15:34,039 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:34,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 23:15:34,040 DEBUG [Listener at localhost/45635] zookeeper.ReadOnlyZKClient(139): Connect 0x74a9c272 to 127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:34,045 DEBUG [Listener at localhost/45635] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ce481a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:34,046 INFO [Listener at localhost/45635] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51389 2023-07-16 23:15:34,050 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:34,051 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101706b6170000a connected 2023-07-16 23:15:34,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:34,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-16 23:15:34,056 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 23:15:34,067 INFO [Listener at localhost/45635] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 23:15:34,067 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:34,067 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:34,067 INFO [Listener at localhost/45635] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 23:15:34,068 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 23:15:34,068 INFO [Listener at localhost/45635] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 23:15:34,068 INFO [Listener at localhost/45635] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 23:15:34,068 INFO [Listener at localhost/45635] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35517 2023-07-16 23:15:34,069 INFO [Listener at localhost/45635] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 23:15:34,070 DEBUG [Listener at localhost/45635] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 23:15:34,070 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:34,071 INFO [Listener at localhost/45635] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 23:15:34,072 INFO [Listener at localhost/45635] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35517 connecting to ZooKeeper ensemble=127.0.0.1:51389 2023-07-16 23:15:34,075 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:355170x0, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 23:15:34,077 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35517-0x101706b6170000b connected 2023-07-16 23:15:34,077 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 23:15:34,078 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 23:15:34,079 DEBUG [Listener at localhost/45635] zookeeper.ZKUtil(164): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 23:15:34,082 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35517 2023-07-16 23:15:34,083 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35517 2023-07-16 23:15:34,083 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35517 2023-07-16 23:15:34,083 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35517 2023-07-16 23:15:34,083 DEBUG [Listener at localhost/45635] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35517 2023-07-16 23:15:34,085 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 23:15:34,086 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 23:15:34,086 INFO [Listener at localhost/45635] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 23:15:34,086 INFO [Listener at localhost/45635] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 23:15:34,086 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 23:15:34,086 INFO [Listener at localhost/45635] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 23:15:34,087 INFO [Listener at localhost/45635] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 23:15:34,087 INFO [Listener at localhost/45635] http.HttpServer(1146): Jetty bound to port 33199 2023-07-16 23:15:34,087 INFO [Listener at localhost/45635] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 23:15:34,090 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:34,091 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68a8faa9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,AVAILABLE} 2023-07-16 23:15:34,091 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:34,091 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1da83b10{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 23:15:34,203 INFO [Listener at localhost/45635] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 23:15:34,203 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 23:15:34,204 INFO [Listener at localhost/45635] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 23:15:34,204 INFO [Listener at localhost/45635] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 23:15:34,204 INFO [Listener at localhost/45635] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 23:15:34,205 INFO [Listener at localhost/45635] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6bdba9f2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/java.io.tmpdir/jetty-0_0_0_0-33199-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6304244914913791609/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:34,207 INFO [Listener at localhost/45635] server.AbstractConnector(333): Started ServerConnector@4975709e{HTTP/1.1, (http/1.1)}{0.0.0.0:33199} 2023-07-16 23:15:34,207 INFO [Listener at localhost/45635] server.Server(415): Started @45831ms 2023-07-16 23:15:34,209 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(951): ClusterId : 754cdeef-e017-491b-8b71-5f8b38598b77 2023-07-16 23:15:34,209 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 23:15:34,212 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 23:15:34,212 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 23:15:34,214 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 23:15:34,217 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ReadOnlyZKClient(139): Connect 0x788623fd to 
127.0.0.1:51389 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 23:15:34,222 DEBUG [RS:3;jenkins-hbase4:35517] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7641d29d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 23:15:34,222 DEBUG [RS:3;jenkins-hbase4:35517] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@727e488d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:34,231 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35517 2023-07-16 23:15:34,231 INFO [RS:3;jenkins-hbase4:35517] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 23:15:34,231 INFO [RS:3;jenkins-hbase4:35517] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 23:15:34,231 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 23:15:34,231 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45129,1689549332096 with isa=jenkins-hbase4.apache.org/172.31.14.131:35517, startcode=1689549334067 2023-07-16 23:15:34,231 DEBUG [RS:3;jenkins-hbase4:35517] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 23:15:34,234 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44607, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 23:15:34,234 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45129] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,234 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 23:15:34,234 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2 2023-07-16 23:15:34,234 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43549 2023-07-16 23:15:34,234 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34471 2023-07-16 23:15:34,239 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:34,239 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:34,239 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:34,239 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:34,239 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:34,239 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,239 WARN [RS:3;jenkins-hbase4:35517] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 23:15:34,239 INFO [RS:3;jenkins-hbase4:35517] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 23:15:34,239 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35517,1689549334067] 2023-07-16 23:15:34,239 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:34,239 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 23:15:34,240 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:34,239 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,239 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:34,240 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,242 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 23:15:34,242 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,242 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,242 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:34,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:34,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:34,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,243 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,244 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:34,244 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,245 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:34,245 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ZKUtil(162): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,246 DEBUG [RS:3;jenkins-hbase4:35517] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 23:15:34,246 INFO [RS:3;jenkins-hbase4:35517] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 23:15:34,247 INFO [RS:3;jenkins-hbase4:35517] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 23:15:34,247 INFO [RS:3;jenkins-hbase4:35517] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 23:15:34,247 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:34,247 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 23:15:34,249 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,249 DEBUG [RS:3;jenkins-hbase4:35517] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 23:15:34,251 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:34,251 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:34,251 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 23:15:34,262 INFO [RS:3;jenkins-hbase4:35517] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 23:15:34,262 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35517,1689549334067-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 23:15:34,272 INFO [RS:3;jenkins-hbase4:35517] regionserver.Replication(203): jenkins-hbase4.apache.org,35517,1689549334067 started 2023-07-16 23:15:34,272 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35517,1689549334067, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35517, sessionid=0x101706b6170000b 2023-07-16 23:15:34,272 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 23:15:34,272 DEBUG [RS:3;jenkins-hbase4:35517] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,272 DEBUG [RS:3;jenkins-hbase4:35517] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35517,1689549334067' 2023-07-16 23:15:34,272 DEBUG [RS:3;jenkins-hbase4:35517] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 23:15:34,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35517,1689549334067' 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 23:15:34,273 DEBUG [RS:3;jenkins-hbase4:35517] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 23:15:34,274 DEBUG [RS:3;jenkins-hbase4:35517] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 23:15:34,274 INFO [RS:3;jenkins-hbase4:35517] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 23:15:34,274 INFO [RS:3;jenkins-hbase4:35517] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 23:15:34,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:34,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:34,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:34,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:34,281 DEBUG [hconnection-0x77684c01-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 23:15:34,282 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 23:15:34,282 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-16 23:15:34,282 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53650, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 23:15:34,283 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:34,283 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-16 23:15:34,283 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 23:15:34,283 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-16 23:15:34,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:34,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:34,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:34,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:34,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35080 deadline: 1689550534290, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:34,290 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:34,291 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:34,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:34,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:34,292 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:34,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:34,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:34,343 INFO [Listener at localhost/45635] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=563 (was 513) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData-prefix:jenkins-hbase4.apache.org,45129,1689549332096 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42007 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:45129 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1185110845-2323 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:37199 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 43549 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:35517Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58149@0x256a9099-SendThread(127.0.0.1:58149) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: qtp105611802-2585-acceptor-0@548ff880-ServerConnector@4975709e{HTTP/1.1, (http/1.1)}{0.0.0.0:33199} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33109 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(488663392) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Server handler 0 on default port 42591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/45635-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp407370416-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp105611802-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2-prefix:jenkins-hbase4.apache.org,37649,1689549332425 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4a4dbde9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1690772335_17 at /127.0.0.1:46738 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp224989267-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1185110845-2322 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3387784e-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1185110845-2324 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1690772335_17 at /127.0.0.1:51334 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x185fde6c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 43549 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34891,1689549326627 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x74a9c272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp105611802-2587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1460984227_17 at /127.0.0.1:51392 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data1/current/BP-16969666-172.31.14.131-1689549331350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp224989267-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 45635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 45635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp407370416-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) 
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost/45635-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp105611802-2588 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:37199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35517 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x5b80f6a2-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5f680cfd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x77684c01-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2-prefix:jenkins-hbase4.apache.org,39573,1689549332276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-945371548_17 at /127.0.0.1:46768 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1185110845-2317 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x239401c3-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x788623fd-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 43549 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:43549 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@369cd471[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data4/current/BP-16969666-172.31.14.131-1689549331350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5059bf58[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:33109 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:43549 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp407370416-2247-acceptor-0@11ae3496-ServerConnector@46225b1{HTTP/1.1, (http/1.1)}{0.0.0.0:38319} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x185fde6c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2216-acceptor-0@26e084b2-ServerConnector@5fb63076{HTTP/1.1, (http/1.1)}{0.0.0.0:34471} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6baa02e2[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549332884 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:46790 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@97f7e85 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-96a53c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp224989267-2276 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:43549 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp105611802-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: Session-HouseKeeper-2cb34b7c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS:3;jenkins-hbase4:35517-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3561b059 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) 
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x185fde6c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x788623fd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x1ce9d870-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1460984227_17 at /127.0.0.1:53448 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2215 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp224989267-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1335685857@qtp-840776046-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x26d2d4a2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2-prefix:jenkins-hbase4.apache.org,33109,1689549332577.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:43549 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/45635.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1185110845-2319 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1690772335_17 at /127.0.0.1:53414 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43549 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@29fef80a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data5/current/BP-16969666-172.31.14.131-1689549331350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 759924984@qtp-1758651478-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39623 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x74a9c272-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x26d2d4a2-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 42591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/45635-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:39573 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 45635 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:37199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially 
hanging thread: BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42591 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:43549 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:39573Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58149@0x256a9099-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:2;jenkins-hbase4:33109-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: ProcessThread(sid:0 cport:51389): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp105611802-2586 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/41101-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) 
Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x239401c3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/45635 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@70657f47 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:37199 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp407370416-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@17beedf3 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp105611802-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 45635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:53482 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp407370416-2246 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 42591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:53364 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1611262527@qtp-1758651478-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/45635-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2-prefix:jenkins-hbase4.apache.org,33109,1689549332577 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:33109Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: ReadOnlyZKClient-127.0.0.1:58149@0x256a9099 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x185fde6c-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:51426 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43549 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x788623fd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1185110845-2320 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/45635.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x1ce9d870 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x74a9c272-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp407370416-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1185110845-2321-acceptor-0@74af834f-ServerConnector@4864ad99{HTTP/1.1, (http/1.1)}{0.0.0.0:46855} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:37199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4fbf617b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:39573-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45129,1689549332096 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Listener at localhost/45635-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@16db6d9f java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x33538665-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1690772335_17 at /127.0.0.1:51370 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-671f63dd-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp105611802-2584 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp407370416-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1968207771@qtp-1797518303-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 4 on default port 42007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-945371548_17 at /127.0.0.1:51400 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x185fde6c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549332884 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Server handler 0 on default port 42007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:43549 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/45635-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x185fde6c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41101-SendThread(127.0.0.1:58149) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: qtp351670191-2306 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x33538665-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-66eabf89-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@2adac310 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:37199 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1185110845-2318 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:53480 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x1ce9d870-SendThread(127.0.0.1:51389) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x5b80f6a2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1282220917@qtp-751501317-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:46804 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635-SendThread(127.0.0.1:51389) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:43549 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 45635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:51389 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: 839417141@qtp-840776046-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38693 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x26d2d4a2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x5b80f6a2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 519510461@qtp-1797518303-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45613 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-217a5542-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp407370416-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data6/current/BP-16969666-172.31.14.131-1689549331350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1285969979-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:37199 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x185fde6c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2307-acceptor-0@2d089bde-ServerConnector@69c9dac6{HTTP/1.1, (http/1.1)}{0.0.0.0:45039} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 43549 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@28d7c2c1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-945371548_17 at /127.0.0.1:53464 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp224989267-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:46820 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1460984227_17 at /127.0.0.1:46766 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp224989267-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:37199 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x77684c01-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6bf682b1 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37649Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x33538665 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/361900993.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:43549 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:1;jenkins-hbase4:37649-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:37649 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@37401243 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741834_1010, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45635.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x185fde6c-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:43549 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1682773933@qtp-751501317-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45961 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp224989267-2277-acceptor-0@52670717-ServerConnector@30f143f0{HTTP/1.1, (http/1.1)}{0.0.0.0:41265} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp351670191-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43549 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp224989267-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data2/current/BP-16969666-172.31.14.131-1689549331350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1285969979-2217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1014544846_17 at /127.0.0.1:51414 [Receiving block BP-16969666-172.31.14.131-1689549331350:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-16969666-172.31.14.131-1689549331350:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data3/current/BP-16969666-172.31.14.131-1689549331350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51389@0x239401c3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=832 (was 781) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=444 (was 434) - SystemLoadAverage LEAK? -, ProcessCount=174 (was 176), AvailableMemoryMB=4665 (was 2755) - AvailableMemoryMB LEAK? - 2023-07-16 23:15:34,347 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=563 is superior to 500 2023-07-16 23:15:34,364 INFO [Listener at localhost/45635] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=561, OpenFileDescriptor=832, MaxFileDescriptor=60000, SystemLoadAverage=444, ProcessCount=174, AvailableMemoryMB=4664 2023-07-16 23:15:34,364 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=561 is superior to 500 2023-07-16 23:15:34,364 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-16 23:15:34,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:34,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:34,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:34,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:34,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:34,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:34,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:34,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:34,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:34,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:34,376 INFO [RS:3;jenkins-hbase4:35517] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35517%2C1689549334067, suffix=, logDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,35517,1689549334067, archiveDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs, maxLogs=32 2023-07-16 23:15:34,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:34,379 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:34,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:34,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:34,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:34,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:34,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:34,397 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK] 2023-07-16 23:15:34,398 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK] 2023-07-16 23:15:34,398 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:34,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:34,398 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK] 2023-07-16 23:15:34,400 INFO [RS:3;jenkins-hbase4:35517] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,35517,1689549334067/jenkins-hbase4.apache.org%2C35517%2C1689549334067.1689549334376 2023-07-16 23:15:34,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:34,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:34,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35080 deadline: 1689550534400, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:34,401 DEBUG [RS:3;jenkins-hbase4:35517] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35851,DS-1425cc32-22f2-4ace-81d9-6ff3f5abef70,DISK], DatanodeInfoWithStorage[127.0.0.1:38277,DS-ff08177b-5d92-4d13-8401-e64693c8a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:39973,DS-973717e4-5fc5-4800-b515-00829bd200b6,DISK]] 2023-07-16 23:15:34,401 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:34,402 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:34,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:34,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:34,403 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:34,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:34,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:34,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:34,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 23:15:34,408 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:34,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-16 23:15:34,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 23:15:34,410 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:34,410 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:34,410 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:34,412 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 23:15:34,414 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f 2023-07-16 
23:15:34,414 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f empty. 2023-07-16 23:15:34,415 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,415 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 23:15:34,430 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-16 23:15:34,431 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8066bc9ade49fff858948d78febce49f, NAME => 't1,,1689549334405.8066bc9ade49fff858948d78febce49f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp 2023-07-16 23:15:34,442 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689549334405.8066bc9ade49fff858948d78febce49f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:34,442 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 8066bc9ade49fff858948d78febce49f, disabling compactions & flushes 2023-07-16 23:15:34,442 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:34,442 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:34,442 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689549334405.8066bc9ade49fff858948d78febce49f. after waiting 0 ms 2023-07-16 23:15:34,443 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:34,443 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:34,443 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 8066bc9ade49fff858948d78febce49f: 2023-07-16 23:15:34,445 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 23:15:34,446 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689549334405.8066bc9ade49fff858948d78febce49f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549334445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549334445"}]},"ts":"1689549334445"} 2023-07-16 23:15:34,447 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 23:15:34,447 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 23:15:34,448 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549334447"}]},"ts":"1689549334447"} 2023-07-16 23:15:34,448 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-16 23:15:34,452 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 23:15:34,452 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 23:15:34,452 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 23:15:34,452 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 23:15:34,452 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 23:15:34,452 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 23:15:34,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, ASSIGN}] 2023-07-16 23:15:34,453 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, ASSIGN 2023-07-16 23:15:34,454 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37649,1689549332425; forceNewPlan=false, retain=false 2023-07-16 23:15:34,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 23:15:34,604 INFO [jenkins-hbase4:45129] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 23:15:34,605 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=8066bc9ade49fff858948d78febce49f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,606 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689549334405.8066bc9ade49fff858948d78febce49f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549334605"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549334605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549334605"}]},"ts":"1689549334605"} 2023-07-16 23:15:34,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 8066bc9ade49fff858948d78febce49f, server=jenkins-hbase4.apache.org,37649,1689549332425}] 2023-07-16 23:15:34,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 23:15:34,760 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,760 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 23:15:34,762 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49372, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 23:15:34,767 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:34,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8066bc9ade49fff858948d78febce49f, NAME => 't1,,1689549334405.8066bc9ade49fff858948d78febce49f.', STARTKEY => '', ENDKEY => ''} 2023-07-16 23:15:34,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689549334405.8066bc9ade49fff858948d78febce49f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 23:15:34,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,768 INFO [StoreOpener-8066bc9ade49fff858948d78febce49f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,770 DEBUG [StoreOpener-8066bc9ade49fff858948d78febce49f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/default/t1/8066bc9ade49fff858948d78febce49f/cf1 2023-07-16 23:15:34,770 DEBUG [StoreOpener-8066bc9ade49fff858948d78febce49f-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/default/t1/8066bc9ade49fff858948d78febce49f/cf1 2023-07-16 23:15:34,770 INFO [StoreOpener-8066bc9ade49fff858948d78febce49f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8066bc9ade49fff858948d78febce49f columnFamilyName cf1 2023-07-16 23:15:34,771 INFO [StoreOpener-8066bc9ade49fff858948d78febce49f-1] regionserver.HStore(310): Store=8066bc9ade49fff858948d78febce49f/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 23:15:34,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/default/t1/8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/default/t1/8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:34,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/default/t1/8066bc9ade49fff858948d78febce49f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 23:15:34,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8066bc9ade49fff858948d78febce49f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10529301600, jitterRate=-0.019382372498512268}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 23:15:34,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8066bc9ade49fff858948d78febce49f: 2023-07-16 23:15:34,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689549334405.8066bc9ade49fff858948d78febce49f., pid=14, masterSystemTime=1689549334760 2023-07-16 23:15:34,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:34,783 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 
2023-07-16 23:15:34,784 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=8066bc9ade49fff858948d78febce49f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:34,784 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689549334405.8066bc9ade49fff858948d78febce49f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549334784"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689549334784"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689549334784"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689549334784"}]},"ts":"1689549334784"} 2023-07-16 23:15:34,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-16 23:15:34,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 8066bc9ade49fff858948d78febce49f, server=jenkins-hbase4.apache.org,37649,1689549332425 in 178 msec 2023-07-16 23:15:34,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 23:15:34,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, ASSIGN in 334 msec 2023-07-16 23:15:34,788 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 23:15:34,788 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549334788"}]},"ts":"1689549334788"} 2023-07-16 23:15:34,789 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-16 23:15:34,799 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 23:15:34,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 393 msec 2023-07-16 23:15:35,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 23:15:35,012 INFO [Listener at localhost/45635] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-16 23:15:35,012 DEBUG [Listener at localhost/45635] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-16 23:15:35,012 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,014 INFO [Listener at localhost/45635] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-16 23:15:35,015 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,015 INFO [Listener at localhost/45635] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-16 23:15:35,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 23:15:35,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 23:15:35,019 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 23:15:35,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-16 23:15:35,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:35080 deadline: 1689549395016, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-16 23:15:35,022 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,023 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-16 23:15:35,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,123 INFO [Listener at localhost/45635] client.HBaseAdmin$15(890): Started disable of t1 2023-07-16 23:15:35,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-16 23:15:35,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-16 23:15:35,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 23:15:35,127 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549335127"}]},"ts":"1689549335127"} 2023-07-16 23:15:35,128 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-16 23:15:35,130 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-16 23:15:35,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, UNASSIGN}] 2023-07-16 23:15:35,131 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, UNASSIGN 2023-07-16 23:15:35,131 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8066bc9ade49fff858948d78febce49f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:35,132 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689549334405.8066bc9ade49fff858948d78febce49f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549335131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689549335131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689549335131"}]},"ts":"1689549335131"} 2023-07-16 23:15:35,132 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 8066bc9ade49fff858948d78febce49f, server=jenkins-hbase4.apache.org,37649,1689549332425}] 2023-07-16 23:15:35,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 23:15:35,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:35,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8066bc9ade49fff858948d78febce49f, disabling compactions & flushes 2023-07-16 23:15:35,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:35,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:35,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689549334405.8066bc9ade49fff858948d78febce49f. after waiting 0 ms 2023-07-16 23:15:35,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 
2023-07-16 23:15:35,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/default/t1/8066bc9ade49fff858948d78febce49f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 23:15:35,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689549334405.8066bc9ade49fff858948d78febce49f. 2023-07-16 23:15:35,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8066bc9ade49fff858948d78febce49f: 2023-07-16 23:15:35,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:35,290 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8066bc9ade49fff858948d78febce49f, regionState=CLOSED 2023-07-16 23:15:35,290 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689549334405.8066bc9ade49fff858948d78febce49f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689549335290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689549335290"}]},"ts":"1689549335290"} 2023-07-16 23:15:35,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 23:15:35,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 8066bc9ade49fff858948d78febce49f, server=jenkins-hbase4.apache.org,37649,1689549332425 in 159 msec 2023-07-16 23:15:35,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-16 23:15:35,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=8066bc9ade49fff858948d78febce49f, UNASSIGN in 162 msec 2023-07-16 23:15:35,299 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689549335299"}]},"ts":"1689549335299"} 2023-07-16 23:15:35,300 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-16 23:15:35,302 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-16 23:15:35,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 178 msec 2023-07-16 23:15:35,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 23:15:35,429 INFO [Listener at localhost/45635] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-16 23:15:35,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-16 23:15:35,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-16 23:15:35,432 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 23:15:35,432 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-16 23:15:35,433 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-16 23:15:35,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,437 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:35,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 23:15:35,439 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f/cf1, FileablePath, hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f/recovered.edits] 2023-07-16 23:15:35,443 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f/recovered.edits/4.seqid to hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/archive/data/default/t1/8066bc9ade49fff858948d78febce49f/recovered.edits/4.seqid 2023-07-16 23:15:35,444 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/.tmp/data/default/t1/8066bc9ade49fff858948d78febce49f 2023-07-16 23:15:35,444 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 23:15:35,446 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-16 23:15:35,447 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-16 23:15:35,449 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-16 23:15:35,450 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-16 23:15:35,450 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-16 23:15:35,450 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689549334405.8066bc9ade49fff858948d78febce49f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689549335450"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:35,451 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 23:15:35,451 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8066bc9ade49fff858948d78febce49f, NAME => 't1,,1689549334405.8066bc9ade49fff858948d78febce49f.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 23:15:35,451 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-16 23:15:35,451 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689549335451"}]},"ts":"9223372036854775807"} 2023-07-16 23:15:35,452 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-16 23:15:35,456 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 23:15:35,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 26 msec 2023-07-16 23:15:35,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 23:15:35,539 INFO [Listener at localhost/45635] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-16 23:15:35,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:35,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:35,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:35,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:35,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:35,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:35,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:35,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:35,555 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:35,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:35,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:35,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35080 deadline: 1689550535566, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:35,567 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:35,570 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,571 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:35,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,592 INFO [Listener at localhost/45635] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=574 (was 561) - Thread LEAK? -, OpenFileDescriptor=844 (was 832) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 444), ProcessCount=174 (was 174), AvailableMemoryMB=4648 (was 4664) 2023-07-16 23:15:35,592 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 23:15:35,611 INFO [Listener at localhost/45635] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=4647 2023-07-16 23:15:35,612 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 23:15:35,612 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-16 23:15:35,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:35,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 23:15:35,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:35,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:35,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:35,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:35,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:35,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:35,625 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:35,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:35,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,627 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:35,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35080 deadline: 1689550535633, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:35,634 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:35,636 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,637 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:35,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 23:15:35,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:35,640 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-16 23:15:35,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 23:15:35,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 23:15:35,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:35,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:35,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:35,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:35,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:35,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:35,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:35,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:35,659 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:35,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:35,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:35,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35080 deadline: 1689550535668, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:35,668 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:35,670 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,671 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:35,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,691 INFO [Listener at localhost/45635] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=576 (was 574) - Thread LEAK? -, OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 416), ProcessCount=174 (was 174), AvailableMemoryMB=4648 (was 4647) - AvailableMemoryMB LEAK? 
- 2023-07-16 23:15:35,691 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-16 23:15:35,711 INFO [Listener at localhost/45635] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=4648 2023-07-16 23:15:35,711 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-16 23:15:35,711 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-16 23:15:35,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:35,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 23:15:35,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:35,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:35,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:35,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:35,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:35,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:35,725 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:35,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:35,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,730 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:35,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35080 deadline: 1689550535736, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:35,736 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:35,738 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,739 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:35,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:35,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:35,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:35,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:35,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:35,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:35,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:35,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:35,754 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:35,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:35,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:35,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35080 deadline: 1689550535764, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:35,765 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:35,767 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,768 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:35,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,787 INFO [Listener at localhost/45635] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=577 (was 576) - Thread LEAK? 
-, OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 416), ProcessCount=174 (was 174), AvailableMemoryMB=4648 (was 4648) 2023-07-16 23:15:35,787 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-16 23:15:35,815 INFO [Listener at localhost/45635] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=836, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=4647 2023-07-16 23:15:35,815 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-16 23:15:35,815 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-16 23:15:35,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:35,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 23:15:35,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:35,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:35,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:35,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:35,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:35,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:35,827 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:35,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:35,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,830 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:35,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:35,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35080 deadline: 1689550535840, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:35,840 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 23:15:35,842 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:35,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,843 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:35,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:35,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:35,844 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-16 23:15:35,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-16 23:15:35,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 23:15:35,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:35,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:35,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 23:15:35,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:35,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:35,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:35,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 23:15:35,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:35,856 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 23:15:35,860 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:35,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-16 23:15:35,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 23:15:35,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 23:15:35,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:35,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:35080 deadline: 1689550535958, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-16 23:15:35,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 23:15:35,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:35,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 23:15:35,978 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 23:15:35,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-16 23:15:36,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 23:15:36,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-16 23:15:36,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 23:15:36,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:36,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 23:15:36,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:36,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 23:15:36,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:36,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:36,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:36,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-16 23:15:36,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:36,097 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:36,099 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:36,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 23:15:36,100 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:36,101 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 23:15:36,101 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 23:15:36,102 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:36,103 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 23:15:36,104 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-16 23:15:36,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 23:15:36,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 23:15:36,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 23:15:36,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:36,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:36,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 23:15:36,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:36,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:36,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:36,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:36,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:35080 deadline: 1689549396210, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-16 23:15:36,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:36,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:36,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:36,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:36,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:36,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:36,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:36,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-16 23:15:36,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:36,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:36,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 23:15:36,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:36,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 23:15:36,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 23:15:36,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 23:15:36,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 23:15:36,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 23:15:36,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 23:15:36,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:36,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 23:15:36,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 23:15:36,230 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 23:15:36,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 23:15:36,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 23:15:36,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 23:15:36,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 23:15:36,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 23:15:36,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:36,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:36,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45129] to rsgroup master 2023-07-16 23:15:36,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 23:15:36,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35080 deadline: 1689550536238, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 2023-07-16 23:15:36,238 WARN [Listener at localhost/45635] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45129 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 23:15:36,240 INFO [Listener at localhost/45635] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 23:15:36,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 23:15:36,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 23:15:36,241 INFO [Listener at localhost/45635] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33109, jenkins-hbase4.apache.org:35517, jenkins-hbase4.apache.org:37649, jenkins-hbase4.apache.org:39573], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 23:15:36,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 23:15:36,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45129] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 23:15:36,259 INFO [Listener at localhost/45635] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=836 (was 836), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 416), ProcessCount=176 (was 174) - ProcessCount LEAK? 
-, AvailableMemoryMB=4645 (was 4647) 2023-07-16 23:15:36,260 WARN [Listener at localhost/45635] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-16 23:15:36,260 INFO [Listener at localhost/45635] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 23:15:36,260 INFO [Listener at localhost/45635] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 23:15:36,260 DEBUG [Listener at localhost/45635] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x239401c3 to 127.0.0.1:51389 2023-07-16 23:15:36,260 DEBUG [Listener at localhost/45635] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,260 DEBUG [Listener at localhost/45635] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 23:15:36,260 DEBUG [Listener at localhost/45635] util.JVMClusterUtil(257): Found active master hash=354071383, stopped=false 2023-07-16 23:15:36,260 DEBUG [Listener at localhost/45635] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 23:15:36,260 DEBUG [Listener at localhost/45635] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 23:15:36,260 INFO [Listener at localhost/45635] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:36,262 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:36,262 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:36,262 INFO [Listener at localhost/45635] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 23:15:36,262 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:36,262 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:36,262 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:36,262 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 23:15:36,262 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:36,262 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 
23:15:36,263 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:36,263 DEBUG [Listener at localhost/45635] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33538665 to 127.0.0.1:51389 2023-07-16 23:15:36,263 DEBUG [Listener at localhost/45635] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,263 INFO [Listener at localhost/45635] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39573,1689549332276' ***** 2023-07-16 23:15:36,263 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:36,263 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 23:15:36,263 INFO [Listener at localhost/45635] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:36,263 INFO [Listener at localhost/45635] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37649,1689549332425' ***** 2023-07-16 23:15:36,263 INFO [Listener at localhost/45635] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:36,263 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:36,263 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:36,263 INFO [Listener at localhost/45635] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33109,1689549332577' ***** 2023-07-16 23:15:36,263 INFO [Listener at localhost/45635] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:36,265 INFO [Listener at localhost/45635] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35517,1689549334067' ***** 2023-07-16 23:15:36,265 INFO [Listener at localhost/45635] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 23:15:36,265 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:36,265 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:36,270 INFO [RS:3;jenkins-hbase4:35517] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6bdba9f2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:36,270 INFO [RS:0;jenkins-hbase4:39573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4b3db42d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:36,270 INFO [RS:2;jenkins-hbase4:33109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@77eff808{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:36,270 INFO [RS:1;jenkins-hbase4:37649] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@133cc24f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 23:15:36,270 INFO [RS:3;jenkins-hbase4:35517] server.AbstractConnector(383): Stopped ServerConnector@4975709e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:36,270 INFO [RS:3;jenkins-hbase4:35517] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:36,271 INFO [RS:0;jenkins-hbase4:39573] server.AbstractConnector(383): Stopped ServerConnector@46225b1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:36,271 INFO [RS:2;jenkins-hbase4:33109] server.AbstractConnector(383): Stopped ServerConnector@69c9dac6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:36,272 INFO [RS:0;jenkins-hbase4:39573] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:36,272 INFO [RS:1;jenkins-hbase4:37649] server.AbstractConnector(383): Stopped ServerConnector@30f143f0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:36,272 INFO [RS:3;jenkins-hbase4:35517] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1da83b10{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:36,273 INFO [RS:0;jenkins-hbase4:39573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68a51037{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:36,273 INFO [RS:3;jenkins-hbase4:35517] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68a8faa9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:36,274 INFO [RS:0;jenkins-hbase4:39573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@614b4af3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:36,273 INFO [RS:1;jenkins-hbase4:37649] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:36,272 INFO [RS:2;jenkins-hbase4:33109] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:36,275 INFO [RS:1;jenkins-hbase4:37649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@57997e99{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:36,276 INFO [RS:2;jenkins-hbase4:33109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@399f6210{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:36,277 INFO [RS:2;jenkins-hbase4:33109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24cd1638{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:36,277 
INFO [RS:1;jenkins-hbase4:37649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41703287{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:36,277 INFO [RS:3;jenkins-hbase4:35517] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:36,277 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:36,277 INFO [RS:3;jenkins-hbase4:35517] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:36,277 INFO [RS:3;jenkins-hbase4:35517] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 23:15:36,278 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:36,278 INFO [RS:1;jenkins-hbase4:37649] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:36,278 INFO [RS:1;jenkins-hbase4:37649] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:36,278 INFO [RS:1;jenkins-hbase4:37649] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 23:15:36,278 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:36,278 DEBUG [RS:1;jenkins-hbase4:37649] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x26d2d4a2 to 127.0.0.1:51389 2023-07-16 23:15:36,278 DEBUG [RS:1;jenkins-hbase4:37649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,278 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37649,1689549332425; all regions closed. 2023-07-16 23:15:36,278 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:36,278 DEBUG [RS:3;jenkins-hbase4:35517] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x788623fd to 127.0.0.1:51389 2023-07-16 23:15:36,278 INFO [RS:2;jenkins-hbase4:33109] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:36,278 INFO [RS:0;jenkins-hbase4:39573] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 23:15:36,278 DEBUG [RS:3;jenkins-hbase4:35517] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,279 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35517,1689549334067; all regions closed. 2023-07-16 23:15:36,280 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:36,280 INFO [RS:2;jenkins-hbase4:33109] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:36,280 INFO [RS:2;jenkins-hbase4:33109] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 23:15:36,280 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(3305): Received CLOSE for 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:36,280 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 23:15:36,280 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:36,280 DEBUG [RS:2;jenkins-hbase4:33109] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ce9d870 to 127.0.0.1:51389 2023-07-16 23:15:36,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 783f0cc50654ddad1c9b50ae8b44cfa6, disabling compactions & flushes 2023-07-16 23:15:36,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:36,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:36,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. after waiting 0 ms 2023-07-16 23:15:36,280 DEBUG [RS:2;jenkins-hbase4:33109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:36,281 INFO [RS:2;jenkins-hbase4:33109] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:36,281 INFO [RS:2;jenkins-hbase4:33109] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:36,281 INFO [RS:2;jenkins-hbase4:33109] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:36,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 783f0cc50654ddad1c9b50ae8b44cfa6 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-16 23:15:36,281 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 23:15:36,281 INFO [RS:0;jenkins-hbase4:39573] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 23:15:36,281 INFO [RS:0;jenkins-hbase4:39573] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
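The CLOSE/flush pairs above show a graceful region close: before hbase:rsgroup's region 783f0cc50654ddad1c9b50ae8b44cfa6 is marked closed, its remaining memstore (dataSize=6.43 KB) is flushed out to an HFile. A test that wants that data persisted before shutdown can request the flush itself through the public Admin API; the snippet below is only an illustrative sketch (the class name, table-name parameter and error handling are placeholders, not taken from this run).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative sketch: flush a table's memstores so a later region close
// (like the "Flushing ... 1/1 column families" entries above) has nothing left to write.
public final class FlushBeforeShutdownSketch {
  public static void flushTable(Configuration conf, String table) throws Exception {
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      admin.flush(TableName.valueOf(table)); // e.g. "hbase:rsgroup"
    }
  }
}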
2023-07-16 23:15:36,281 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 23:15:36,281 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1478): Online Regions={783f0cc50654ddad1c9b50ae8b44cfa6=hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6., 1588230740=hbase:meta,,1.1588230740} 2023-07-16 23:15:36,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 23:15:36,281 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(3305): Received CLOSE for 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:36,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 23:15:36,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 23:15:36,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 23:15:36,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 23:15:36,281 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:36,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-16 23:15:36,281 DEBUG [RS:0;jenkins-hbase4:39573] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b80f6a2 to 127.0.0.1:51389 2023-07-16 23:15:36,281 DEBUG [RS:0;jenkins-hbase4:39573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,281 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 23:15:36,282 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1478): Online Regions={95e5611863563cc6568d4edec65b3ad1=hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1.} 2023-07-16 23:15:36,282 DEBUG [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1504): Waiting on 95e5611863563cc6568d4edec65b3ad1 2023-07-16 23:15:36,281 DEBUG [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1504): Waiting on 1588230740, 783f0cc50654ddad1c9b50ae8b44cfa6 2023-07-16 23:15:36,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 95e5611863563cc6568d4edec65b3ad1, disabling compactions & flushes 2023-07-16 23:15:36,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:36,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:36,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. after waiting 0 ms 2023-07-16 23:15:36,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 
2023-07-16 23:15:36,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 95e5611863563cc6568d4edec65b3ad1 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-16 23:15:36,296 DEBUG [RS:1;jenkins-hbase4:37649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs 2023-07-16 23:15:36,296 INFO [RS:1;jenkins-hbase4:37649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37649%2C1689549332425:(num 1689549333169) 2023-07-16 23:15:36,296 DEBUG [RS:3;jenkins-hbase4:35517] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs 2023-07-16 23:15:36,296 DEBUG [RS:1;jenkins-hbase4:37649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,296 INFO [RS:3;jenkins-hbase4:35517] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35517%2C1689549334067:(num 1689549334376) 2023-07-16 23:15:36,296 DEBUG [RS:3;jenkins-hbase4:35517] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,296 INFO [RS:1;jenkins-hbase4:37649] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,296 INFO [RS:3;jenkins-hbase4:35517] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,297 INFO [RS:1;jenkins-hbase4:37649] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:36,297 INFO [RS:3;jenkins-hbase4:35517] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:36,297 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:36,297 INFO [RS:3;jenkins-hbase4:35517] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:36,297 INFO [RS:3;jenkins-hbase4:35517] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:36,297 INFO [RS:3;jenkins-hbase4:35517] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:36,297 INFO [RS:1;jenkins-hbase4:37649] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:36,297 INFO [RS:1;jenkins-hbase4:37649] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:36,297 INFO [RS:1;jenkins-hbase4:37649] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:36,297 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
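Once a region server has closed all of its regions, its WAL is closed and the last WAL file is archived, which is what the "Moved 1 WAL file(s) to .../oldWALs" and "Closed WAL: AsyncFSWAL ..." entries record. If you need to see what ended up in the archive, a plain HDFS listing is enough; the sketch below assumes the test's filesystem configuration and reuses the oldWALs path from this run purely as an example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: list WAL files archived under oldWALs after shutdown.
public final class ListArchivedWalsSketch {
  public static void listOldWals(Configuration conf) throws Exception {
    Path oldWals = new Path("/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs");
    FileSystem fs = FileSystem.get(conf);
    for (FileStatus status : fs.listStatus(oldWals)) {
      System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
    }
  }
}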
2023-07-16 23:15:36,299 INFO [RS:3;jenkins-hbase4:35517] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35517 2023-07-16 23:15:36,299 INFO [RS:1;jenkins-hbase4:37649] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37649 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35517,1689549334067 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,301 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,302 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:36,302 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:36,302 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:36,302 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37649,1689549332425] 2023-07-16 23:15:36,302 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37649,1689549332425; numProcessing=1 2023-07-16 23:15:36,305 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37649,1689549332425 already deleted, retry=false 2023-07-16 23:15:36,305 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37649,1689549332425 expired; onlineServers=3 2023-07-16 23:15:36,305 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35517,1689549334067] 2023-07-16 23:15:36,305 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35517,1689549334067; numProcessing=2 2023-07-16 23:15:36,305 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37649,1689549332425 2023-07-16 23:15:36,307 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35517,1689549334067 already deleted, retry=false 2023-07-16 23:15:36,307 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35517,1689549334067 expired; onlineServers=2 2023-07-16 23:15:36,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/.tmp/m/f23be3aa62994ef382bad4d7144fd32e 2023-07-16 23:15:36,334 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,336 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/.tmp/info/89d767bd81d4446db77ce837f16cd957 2023-07-16 23:15:36,338 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f23be3aa62994ef382bad4d7144fd32e 2023-07-16 23:15:36,345 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/.tmp/m/f23be3aa62994ef382bad4d7144fd32e as hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/m/f23be3aa62994ef382bad4d7144fd32e 2023-07-16 23:15:36,353 INFO 
[regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,353 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/.tmp/info/5d9154825a734183a3df1e712217ff0d 2023-07-16 23:15:36,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f23be3aa62994ef382bad4d7144fd32e 2023-07-16 23:15:36,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/m/f23be3aa62994ef382bad4d7144fd32e, entries=12, sequenceid=29, filesize=5.4 K 2023-07-16 23:15:36,360 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 89d767bd81d4446db77ce837f16cd957 2023-07-16 23:15:36,360 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 783f0cc50654ddad1c9b50ae8b44cfa6 in 79ms, sequenceid=29, compaction requested=false 2023-07-16 23:15:36,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5d9154825a734183a3df1e712217ff0d 2023-07-16 23:15:36,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/.tmp/info/5d9154825a734183a3df1e712217ff0d as hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/info/5d9154825a734183a3df1e712217ff0d 2023-07-16 23:15:36,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5d9154825a734183a3df1e712217ff0d 2023-07-16 23:15:36,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/info/5d9154825a734183a3df1e712217ff0d, entries=3, sequenceid=9, filesize=5.0 K 2023-07-16 23:15:36,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 95e5611863563cc6568d4edec65b3ad1 in 99ms, sequenceid=9, compaction requested=false 2023-07-16 23:15:36,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/rsgroup/783f0cc50654ddad1c9b50ae8b44cfa6/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-16 23:15:36,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:36,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:36,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 783f0cc50654ddad1c9b50ae8b44cfa6: 2023-07-16 23:15:36,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689549333480.783f0cc50654ddad1c9b50ae8b44cfa6. 2023-07-16 23:15:36,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/namespace/95e5611863563cc6568d4edec65b3ad1/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 23:15:36,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:36,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 95e5611863563cc6568d4edec65b3ad1: 2023-07-16 23:15:36,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689549333327.95e5611863563cc6568d4edec65b3ad1. 2023-07-16 23:15:36,402 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/.tmp/rep_barrier/fa6b346c9ceb4db4931d426032d36749 2023-07-16 23:15:36,406 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:36,406 INFO [RS:3;jenkins-hbase4:35517] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35517,1689549334067; zookeeper connection closed. 
2023-07-16 23:15:36,406 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:35517-0x101706b6170000b, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:36,407 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa6b346c9ceb4db4931d426032d36749 2023-07-16 23:15:36,407 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@178ae42e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@178ae42e 2023-07-16 23:15:36,420 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/.tmp/table/25deb51f89634a529f8508cef4c8e29b 2023-07-16 23:15:36,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 25deb51f89634a529f8508cef4c8e29b 2023-07-16 23:15:36,427 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/.tmp/info/89d767bd81d4446db77ce837f16cd957 as hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/info/89d767bd81d4446db77ce837f16cd957 2023-07-16 23:15:36,432 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 89d767bd81d4446db77ce837f16cd957 2023-07-16 23:15:36,432 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/info/89d767bd81d4446db77ce837f16cd957, entries=22, sequenceid=26, filesize=7.3 K 2023-07-16 23:15:36,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/.tmp/rep_barrier/fa6b346c9ceb4db4931d426032d36749 as hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/rep_barrier/fa6b346c9ceb4db4931d426032d36749 2023-07-16 23:15:36,439 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa6b346c9ceb4db4931d426032d36749 2023-07-16 23:15:36,439 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/rep_barrier/fa6b346c9ceb4db4931d426032d36749, entries=1, sequenceid=26, filesize=4.9 K 2023-07-16 23:15:36,440 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/.tmp/table/25deb51f89634a529f8508cef4c8e29b as hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/table/25deb51f89634a529f8508cef4c8e29b 2023-07-16 
23:15:36,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 25deb51f89634a529f8508cef4c8e29b 2023-07-16 23:15:36,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/table/25deb51f89634a529f8508cef4c8e29b, entries=6, sequenceid=26, filesize=5.1 K 2023-07-16 23:15:36,446 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 165ms, sequenceid=26, compaction requested=false 2023-07-16 23:15:36,457 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-16 23:15:36,458 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 23:15:36,459 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:36,459 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 23:15:36,459 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 23:15:36,462 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:36,462 INFO [RS:1;jenkins-hbase4:37649] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37649,1689549332425; zookeeper connection closed. 2023-07-16 23:15:36,462 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:37649-0x101706b61700002, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:36,462 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@30b26ddb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@30b26ddb 2023-07-16 23:15:36,482 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39573,1689549332276; all regions closed. 2023-07-16 23:15:36,482 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33109,1689549332577; all regions closed. 
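By now every region server has logged "all regions closed" and is dropping its RPC server and ZooKeeper session. Test code normally waits on the cluster object for this rather than scraping the log, using the same Waiter-style polling visible earlier in this run. The sketch below assumes an HBaseTestingUtility instance named util and an arbitrary 60-second timeout.

import org.apache.hadoop.hbase.HBaseTestingUtility;

// Illustrative sketch: block until the mini cluster reports no live region server threads,
// i.e. until the "all regions closed" / "Exiting" sequence above has completed.
public final class WaitForRegionServersDownSketch {
  public static void await(HBaseTestingUtility util) throws Exception {
    util.waitFor(60_000,
      () -> util.getHBaseCluster().getLiveRegionServerThreads().isEmpty());
  }
}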
2023-07-16 23:15:36,490 DEBUG [RS:0;jenkins-hbase4:39573] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs 2023-07-16 23:15:36,490 INFO [RS:0;jenkins-hbase4:39573] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39573%2C1689549332276:(num 1689549333169) 2023-07-16 23:15:36,490 DEBUG [RS:0;jenkins-hbase4:39573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,490 INFO [RS:0;jenkins-hbase4:39573] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,490 DEBUG [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs 2023-07-16 23:15:36,490 INFO [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33109%2C1689549332577.meta:.meta(num 1689549333267) 2023-07-16 23:15:36,490 INFO [RS:0;jenkins-hbase4:39573] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:36,491 INFO [RS:0;jenkins-hbase4:39573] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 23:15:36,491 INFO [RS:0;jenkins-hbase4:39573] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 23:15:36,491 INFO [RS:0;jenkins-hbase4:39573] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 23:15:36,491 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:36,492 INFO [RS:0;jenkins-hbase4:39573] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39573 2023-07-16 23:15:36,494 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,495 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:36,495 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/WALs/jenkins-hbase4.apache.org,33109,1689549332577/jenkins-hbase4.apache.org%2C33109%2C1689549332577.1689549333178 not finished, retry = 0 2023-07-16 23:15:36,495 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39573,1689549332276] 2023-07-16 23:15:36,495 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39573,1689549332276 2023-07-16 23:15:36,496 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39573,1689549332276; numProcessing=3 2023-07-16 23:15:36,498 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39573,1689549332276 
already deleted, retry=false 2023-07-16 23:15:36,499 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39573,1689549332276 expired; onlineServers=1 2023-07-16 23:15:36,598 DEBUG [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/oldWALs 2023-07-16 23:15:36,598 INFO [RS:2;jenkins-hbase4:33109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33109%2C1689549332577:(num 1689549333178) 2023-07-16 23:15:36,599 DEBUG [RS:2;jenkins-hbase4:33109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,599 INFO [RS:2;jenkins-hbase4:33109] regionserver.LeaseManager(133): Closed leases 2023-07-16 23:15:36,599 INFO [RS:2;jenkins-hbase4:33109] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 23:15:36,599 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:36,600 INFO [RS:2;jenkins-hbase4:33109] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33109 2023-07-16 23:15:36,602 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33109,1689549332577 2023-07-16 23:15:36,602 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 23:15:36,603 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33109,1689549332577] 2023-07-16 23:15:36,603 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33109,1689549332577; numProcessing=4 2023-07-16 23:15:36,607 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33109,1689549332577 already deleted, retry=false 2023-07-16 23:15:36,607 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33109,1689549332577 expired; onlineServers=0 2023-07-16 23:15:36,607 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45129,1689549332096' ***** 2023-07-16 23:15:36,607 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 23:15:36,608 DEBUG [M:0;jenkins-hbase4:45129] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1125352c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 23:15:36,608 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 23:15:36,610 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 23:15:36,610 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 23:15:36,610 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 23:15:36,610 INFO [M:0;jenkins-hbase4:45129] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1e20b2e1{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 23:15:36,610 INFO [M:0;jenkins-hbase4:45129] server.AbstractConnector(383): Stopped ServerConnector@5fb63076{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:36,610 INFO [M:0;jenkins-hbase4:45129] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 23:15:36,611 INFO [M:0;jenkins-hbase4:45129] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32bc0174{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 23:15:36,612 INFO [M:0;jenkins-hbase4:45129] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@157e3680{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/hadoop.log.dir/,STOPPED} 2023-07-16 23:15:36,612 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45129,1689549332096 2023-07-16 23:15:36,612 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45129,1689549332096; all regions closed. 2023-07-16 23:15:36,612 DEBUG [M:0;jenkins-hbase4:45129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 23:15:36,612 INFO [M:0;jenkins-hbase4:45129] master.HMaster(1491): Stopping master jetty server 2023-07-16 23:15:36,613 INFO [M:0;jenkins-hbase4:45129] server.AbstractConnector(383): Stopped ServerConnector@4864ad99{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 23:15:36,613 DEBUG [M:0;jenkins-hbase4:45129] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 23:15:36,613 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 23:15:36,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549332884] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689549332884,5,FailOnTimeoutGroup] 2023-07-16 23:15:36,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549332884] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689549332884,5,FailOnTimeoutGroup] 2023-07-16 23:15:36,613 DEBUG [M:0;jenkins-hbase4:45129] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 23:15:36,613 INFO [M:0;jenkins-hbase4:45129] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-16 23:15:36,613 INFO [M:0;jenkins-hbase4:45129] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 23:15:36,613 INFO [M:0;jenkins-hbase4:45129] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 23:15:36,613 DEBUG [M:0;jenkins-hbase4:45129] master.HMaster(1512): Stopping service threads 2023-07-16 23:15:36,613 INFO [M:0;jenkins-hbase4:45129] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 23:15:36,614 ERROR [M:0;jenkins-hbase4:45129] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 23:15:36,614 INFO [M:0;jenkins-hbase4:45129] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 23:15:36,614 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 23:15:36,614 DEBUG [M:0;jenkins-hbase4:45129] zookeeper.ZKUtil(398): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 23:15:36,614 WARN [M:0;jenkins-hbase4:45129] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 23:15:36,614 INFO [M:0;jenkins-hbase4:45129] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 23:15:36,614 INFO [M:0;jenkins-hbase4:45129] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 23:15:36,614 DEBUG [M:0;jenkins-hbase4:45129] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 23:15:36,614 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:36,614 DEBUG [M:0;jenkins-hbase4:45129] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:36,614 DEBUG [M:0;jenkins-hbase4:45129] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 23:15:36,614 DEBUG [M:0;jenkins-hbase4:45129] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
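The master shuts down last: service threads and the procedure dispatcher stop, and its local master:store region (1595e783b53d99cd5eef43b6debb2682, holding procedure and master data) is flushed and closed like any other region before the master's ZooKeeper session goes away. A test that wants to confirm the master really reached the stopped state, rather than merely observing its znode disappear, can ask the mini cluster directly; the class and method names below are placeholders.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.master.HMaster;

// Illustrative sketch: verify the active master of the mini cluster has stopped.
public final class MasterStoppedCheckSketch {
  public static boolean isMasterStopped(HBaseTestingUtility util) {
    HMaster master = util.getHBaseCluster().getMaster();
    return master == null || master.isStopped();
  }
}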
2023-07-16 23:15:36,615 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.62 KB 2023-07-16 23:15:36,626 INFO [M:0;jenkins-hbase4:45129] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3d501d76c92a4af38366edd17cc0cc1e 2023-07-16 23:15:36,631 DEBUG [M:0;jenkins-hbase4:45129] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3d501d76c92a4af38366edd17cc0cc1e as hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3d501d76c92a4af38366edd17cc0cc1e 2023-07-16 23:15:36,636 INFO [M:0;jenkins-hbase4:45129] regionserver.HStore(1080): Added hdfs://localhost:43549/user/jenkins/test-data/ca11a3a7-366c-e0f6-f7d5-9abf34eabdb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3d501d76c92a4af38366edd17cc0cc1e, entries=22, sequenceid=175, filesize=11.1 K 2023-07-16 23:15:36,637 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78012, heapSize ~90.60 KB/92776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-16 23:15:36,638 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 23:15:36,638 DEBUG [M:0;jenkins-hbase4:45129] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 23:15:36,643 INFO [M:0;jenkins-hbase4:45129] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 23:15:36,643 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 23:15:36,643 INFO [M:0;jenkins-hbase4:45129] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45129 2023-07-16 23:15:36,645 DEBUG [M:0;jenkins-hbase4:45129] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45129,1689549332096 already deleted, retry=false 2023-07-16 23:15:37,063 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:37,063 INFO [M:0;jenkins-hbase4:45129] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45129,1689549332096; zookeeper connection closed. 2023-07-16 23:15:37,063 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): master:45129-0x101706b61700000, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:37,164 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:37,164 INFO [RS:2;jenkins-hbase4:33109] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33109,1689549332577; zookeeper connection closed. 
2023-07-16 23:15:37,164 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:33109-0x101706b61700003, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:37,164 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@60321e17] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@60321e17 2023-07-16 23:15:37,264 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:37,264 INFO [RS:0;jenkins-hbase4:39573] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39573,1689549332276; zookeeper connection closed. 2023-07-16 23:15:37,264 DEBUG [Listener at localhost/45635-EventThread] zookeeper.ZKWatcher(600): regionserver:39573-0x101706b61700001, quorum=127.0.0.1:51389, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 23:15:37,264 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@25bef6be] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@25bef6be 2023-07-16 23:15:37,264 INFO [Listener at localhost/45635] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 23:15:37,265 WARN [Listener at localhost/45635] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:37,268 INFO [Listener at localhost/45635] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:37,372 WARN [BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:37,372 WARN [BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-16969666-172.31.14.131-1689549331350 (Datanode Uuid b391c26e-f0ff-4650-88eb-d68c4700f58c) service to localhost/127.0.0.1:43549 2023-07-16 23:15:37,372 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data5/current/BP-16969666-172.31.14.131-1689549331350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:37,373 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data6/current/BP-16969666-172.31.14.131-1689549331350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:37,374 WARN [Listener at localhost/45635] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:37,377 INFO [Listener at localhost/45635] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:37,480 WARN [BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 
2023-07-16 23:15:37,480 WARN [BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-16969666-172.31.14.131-1689549331350 (Datanode Uuid 1d13b32b-9886-4cd6-b4cb-3e7d329f625a) service to localhost/127.0.0.1:43549 2023-07-16 23:15:37,481 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data3/current/BP-16969666-172.31.14.131-1689549331350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:37,481 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data4/current/BP-16969666-172.31.14.131-1689549331350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:37,482 WARN [Listener at localhost/45635] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 23:15:37,485 INFO [Listener at localhost/45635] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:37,588 WARN [BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 23:15:37,589 WARN [BP-16969666-172.31.14.131-1689549331350 heartbeating to localhost/127.0.0.1:43549] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-16969666-172.31.14.131-1689549331350 (Datanode Uuid 4969de56-b7d8-455c-8b67-b95c8e88d30a) service to localhost/127.0.0.1:43549 2023-07-16 23:15:37,589 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data1/current/BP-16969666-172.31.14.131-1689549331350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:37,590 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/413d76b4-2e44-dfb4-1db9-e439aae3ec87/cluster_ff9e018c-c5e7-b6ae-98b9-e04da4323288/dfs/data/data2/current/BP-16969666-172.31.14.131-1689549331350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 23:15:37,600 INFO [Listener at localhost/45635] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 23:15:37,721 INFO [Listener at localhost/45635] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 23:15:37,755 INFO [Listener at localhost/45635] hbase.HBaseTestingUtility(1293): Minicluster is down
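The run ends with the DataNodes, the MiniZK quorum and the minicluster itself torn down ("Minicluster is down"). For reference, a log of this shape is normally produced by a simple start/stop pair on HBaseTestingUtility in the test class lifecycle; the sketch below uses three region servers as in this run but is a generic outline, not the actual TestRSGroupsAdmin1 setup.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;

// Illustrative sketch: mini-cluster lifecycle whose teardown emits the
// "Shutting down minicluster" ... "Minicluster is down" sequence seen above.
public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    UTIL.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    UTIL.shutdownMiniCluster();
  }
}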